Yu (Andy) Huang


| Google Scholar | Scopus | Mendeley | ResearchGate | ORCID | Publons | LinkedIn | GitHub | CV |

Contact: andypotatohy at gmail.com

If you don't want to read my bio below, here's a list of quick links to my major work:

  • 2013 Automated MRI Segmentation for Individualized HD-tDCS Targeting
  • 2014 Morphologically and Anatomically accuRate Segmentation (MARS)
  • 2015 The New York Head
  • 2016 Validation of TES models using in vivo intracranial human data
  • 2017 A fully automated Realistic vOlumetric-Approach-based Simulator for Transcranial electric stimulation (ROAST)
  • 2018 Multi-electrode transcranial electric stimulation can reach deep targets (DeepTES)
  • 2019 Optimized Interferential Stimulation of Human Brains
  • 2020-now Cracking AI in radiology, seeking grants, trying to survive academia...

  • Short Bio:

    I entered BME by accident in 2003. I thought I was going to do physics, but destiny led me to BME. In 2007 I started doing research in Neural Engineering, by an even bigger accident (I just thought it would be cool if people could control stuff using only their minds). I began looking into brain stimulation in 2010, because I found it's actually quite hard to control things by mind alone. So I did my PhD in brain stimulation, technically "Computational Models of Current Flow in Transcranial Electrical Stimulation" -- right, that's the title of my PhD thesis. It started with an effort to automate things so that people (especially clinicians) can quickly get a sophisticated model of the brain under stimulation (how quickly? These days a couple of hours, if you don't care about the tiny details). You can download all this work here. Then I got nerdy about the image segmentation part of building the model. I found it not perfect "theoretically", so I dug into the math and figured things out. I added a morphological constraint (a fancy term that simply means your face cannot be inside your brain) to the algorithm, and it improved the segmentation results. Unfortunately it relies heavily on the training data, so I myself have rarely used it since it was published. But you can still find it here if you feel nerdy too.

    Then I came to know about the so-called "standard head", an average head anatomy computed from a group of people. So why not make a model out of it? I grabbed the brain from standard head 1, took the skull and soft tissues from standard head 2, and the jaw and neck from standard head 3, and yeah, a monster head (a Frankenstein) was created. I collaborated with a German colleague on evaluating this creepy head by looking at "targeted stimulation" and "source localization", and it turned out to work better than using any arbitrary individual's head. Since the work was done in New York (we had a great time, lots of beer), we named it the New York Head, and you can get it for free.
    Then we realized that people have built so many different models ever since WWII, but nobody knows whether they tell the truth. Thanks to my advisor, who knows a doctor at NYU Medical Center with an army of patients, we could get real recordings from inside the brain. I started building models for these patients and comparing them with the recordings, and it turned out the models really do tell the truth about the distribution of the electric field under stimulation. The work was recently published here, with a short video showing off the pretty models. I also recently released a free, fully automated pipeline that combines segmentation, FEM meshing, and solving into a single Matlab package, providing a realistic, volumetric, end-to-end solution for current-flow modeling. It's fast and easy to use. It's named ROAST and you can download it here.

    So that's all the major work I've been doing. Now I'm trying to switch to the hottest area: using AI to help with cancer detection...