Director, Perception & Brain Dynamics Laboratory
@nyu
; Cognitive neuroscientist interested in how the brain generates the mind; “He” rhymes with “the” /hə/
My article summarizing our recent workshop on consciousness at the NIH (June 2023) is now out in
@NeuroCellPress
!
50 d free-access link:
This article presents a vision for the future of consciousness research that is more empirically grounded and broader in scope.
Tenured at NYU.
I'd like to thank all those who wrote me letters. I'm beyond grateful to my past and present mentors, mentees, colleagues and peers who have inspired and challenged me, and made me into a better scientist.
My lab currently has an opening for a research assistant (official title: Research Associate). The successful applicant will have the opportunity to be involved in cutting-edge human perceptual neuroscience projects using a range of techniques, including 7T fMRI, MEG, EEG, and ECoG.
So excited that my review on consciousness is now available online
@TrendsCognSci
! This paper summarizes my thoughts on the neural basis of conscious awareness over the past few years, and lays out a vision for consciousness research ahead.
The recordings from our consciousness workshop held at the NIH in June can now be viewed on demand!
This is a treasure trove for anyone interested in learning more about consciousness research, and anyone teaching a course this fall that touches on this topic.
links below👇
Our lab currently has an open postdoc position in the field of perceptual neuroscience.
Specific questions: flexible, with a focus on vision;
Techniques: open (fMRI, E/MEG, iEEG, stim, modeling);
Salary: $70,000 or higher (w excellent benefits);
Start date: flexible in 2024
Excited to share a new paper from our lab
@NatureComms
:
"Neural integration underlying naturalistic prediction flexibly adapts to varying sensory input rate."
How do humans make predictions about future sensory input in the natural environment? 1/6
Announcing our lab's new paper
@NatureComms
: Long-term priors influence visual perception through recruitment of long-range feedback. Led by the amazing
@regg3
.
A large iEEG study of bistable visual perception, focusing on prior's influence on perception
IMHO the recent 124-author open letter has done immeasurable damage to the field. To outsiders and students pondering the viability of this field: There are many reputable consciousness scientists who did not sign this letter, and some of the signatories do not actually work in the field.
Our lab received an NIH R01 award to study the role of spontaneous brain activity in perception!
We now have two open post-doctoral positions with flexible start dates (immediate start possible).
Possible techniques include 7T, M/EEG, ECoG.
Details here:
New lab paper out
@eLife
led by
@ellapod
:
We reveal prominent covariation of spontaneous pupil dynamics and large-scale cortical activity in humans at rest and during task performance, with implications for perceptual decision making & conscious perception.
What are the neural mechanisms underlying conscious object recognition? And how are they different from unconscious processing of the same sensory input? Read our new paper
@NatureComms
to find out:
Amazing work by
@maxglev
&
@ellapod
,
@NYULH_Neuro
1/6
A recent guest speaker to our lab said they often puzzle about the origins of aperiodic (1/f) activity in E/MEG/ECoG. In this paper, we showed that random recurrent networks—a common building block of cortical circuitry—provide a parsimonious explanation
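A minimal sketch of the intuition (illustrative only, with made-up parameters, not the paper's actual model): drive a stable random linear recurrent network with white noise and check that its summed activity shows a broadband spectrum with power falling off toward higher frequencies, i.e. 1/f-like, without any oscillator built in.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, dt, tau = 200, 20000, 1e-3, 0.01  # units, time steps, step size (s), time constant (s)

# Random recurrent weights, scaled below the stability boundary (gain 0.9 < 1)
W = 0.9 * rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))
x = np.zeros(N)
field = np.empty(T)
for t in range(T):
    x = x + (dt / tau) * (-x + W @ x + rng.normal(0.0, 1.0, N))
    field[t] = x.sum()  # summed activity as a proxy for a field signal

# Broadband spectrum: power decays with frequency, with no sharp peak required
freqs = np.fft.rfftfreq(T, dt)
psd = np.abs(np.fft.rfft(field - field.mean())) ** 2
low = psd[(freqs > 1) & (freqs < 10)].mean()
high = psd[(freqs > 40) & (freqs < 100)].mean()
assert low > high  # aperiodic 1/f-like falloff
```

The recurrent mixing spreads variance across many effective time constants, which is what produces the broadband falloff.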
Excited to announce our new paper
@PNASNews
:
"One-trial perceptual learning in the absence of conscious remembering and independent of the medial temporal lobe."
Work in collaboration with the Squire lab
@UCSD
.
1/6
We currently have an open post-doc position in my lab at NYU to study neural mechanisms of human perception. Experience with advanced neuroimaging data analyses and/or large-scale computational modeling especially desirable. Come join our awesome team!
Excited to share new paper from our lab:
Neural oscillations promoting perceptual stability and perceptual memory during bistable perception
led by Mike Zhu and
@regg3
, it all started from Mike's undergraduate senior thesis
@NYU_CNS
and
@NYULH_Neuro
!
The He lab at NYU is looking for a bright post-doctoral candidate to join us in early 2021 to study perception & prediction. Experience with human neuroimaging (broadly defined) and computational skills desired. Email/DM me
@OHBM
@CogCompNeuro
@CogNeuroNews
Our new paper is out in
#JNeurosci
: Prior knowledge from one-shot perceptual learning sharpens neural representations across the cortical hierarchy, via suppression in visual areas and selective enhancement in the frontoparietal and default-mode networks.
Thrilled and humbled to receive the prestigious Vilcek Prize for Creative Promise in Biomedical Science! I’d like to thank all my mentors, past and current mentees, and brilliant colleagues at Wash U, NIH and NYU who made this journey possible.
@NYULH_Neuro
@cai2r
@NYU_CNS
Advice to young scientists: No need to include a photo in your CV when applying for jobs. It's distracting and not useful. I used to conduct phone interviews, and people would show up in the lab as a trainee/employee without my knowing what they looked like. It worked great.
Announcing new paper from our lab out today
@eLife
, led by the amazing
@regg3
, ably assisted by
@MattFlounders
and Mike Zhu, and conducted
@NYULH_Neuro
.
"Frequency-specific neural signatures of perceptual content and perceptual stability"
Announcing new paper from our lab led by
@YuanhaoWu
:
In this study, we combine 7T fMRI and MEG to investigate the spatiotemporal evolution of neural dynamics and its representational format during conscious visual recognition in a challenging visual task.
Check out our new open-access review article
@NeuroConsc
, with the amazing
@baror_shira
:
Here we advocate for a more naturalistic approach to studying the neural basis of conscious perception.
Announcing the "Next Frontiers in Consciousness Research" Workshop held at the NIH on June 26-28, 2023!
In-person attendance is invitation only, but there are a limited number of slots for trainees to present posters (with an $800 travel stipend each).
MAIN 2021 is delighted to announce that Biyu Jade He
@BiyuHe
from the Neuroscience Institute, NYU, will be giving a talk entitled: "Predictive mechanisms in perception".
Sign up for free today at !
Please RT!
Day 1:
Day 2:
Day 3:
Workshop website:
Many thanks again to all the speakers who came in person or spoke virtually, and all the staff and program officers who made this a reality!!
Our final
#OHBM2020
Keynote Series interview: Rachael Strickland
@drstick_ray
talks with Biyu He
@BiyuHe
about her upcoming talk “From Resting State to Conscious Perception”, her research, career evolution, and neural basis of conscious perception.
Neither PsyArXiv preprint offered substantial scientific or philosophical arguments supporting the label of "pseudoscience". As many have said, branding your opponents pseudoscientists while applying enormous peer and social media pressure is not the way to go.
We are better than this!
Our workshop is now advertised on NIH BRAIN Initiative's website!
Register for online attendance here:
(no registration cost)
An event co-organized with
@Wokkinho
@ayaka_hach
@sharifkronemer
and more.
I don’t get this. Why would anyone pay $2K to have a preprint commented on by 2-3 experts if it doesn’t result in a “peer-reviewed publication”? Why not just post it on bioRxiv and leave it to any of the free review servers? Or post it on one’s own website and open a comment thread?
Today, we’re introducing a new model that eliminates accept/reject decisions.
By publishing every paper with eLife reviews as a Reviewed Preprint, we plan to restore autonomy to authors, ensuring that they will be judged by what, not where, they publish.
With a TWCF grant,
@BiyuHe
@NYUGrossman
will measure
#brain
activity as humans observe stimuli w/different levels of certainty. This research aims to test competing theories about the role of the prefrontal cortex in conscious perception:
#consciousness
Join us to hear Dr Biyu Jade He's talk on "Predictive (broadly defined) Mechanisms in Perception"
@BiyuHe
To read the full abstract and request to join, follow the link:
Had a wonderful day visiting Columbia University Dept of Psychology today! Lots of exciting scientific discussions. Many thanks to
@SpagnaPhD
for being such a gracious host!!
Only 3 days left to submit an abstract to this exciting workshop! 👇 A limited number of abstracts will be selected as poster presentations at the in-person event, with each presenter receiving an $800 travel award. No registration cost for in-person or online participation.
This position is still open. The post-doc fellow can gain expertise in a range of cool human neuroimaging techniques (7T fMRI, M/EEG, ECoG), and work on fascinating questions about perception, with flexibility in research topic and ample career development opportunities
@NYULH_Neuro
. Pls RT
In addition, I advocate for stronger bridges between consciousness science and adjacent fields of cognitive neuroscience, such as memory, emotion, and executive control.
For the whole story, read the full article, free open access at the above link for 50 days.
How can we better understand the way humans learn from single events? A $1.2 million grant will enable researchers
@BiyuHe
&
@ekoermann
to explore the neural networks behind the process known as one-shot perceptual learning.
Read more about their work ⤵️
In this paper, I argue for a pluralistic neurobiological approach to understanding consciousness.
I also propose a new framework, the Joint Determinant Theory (JDT), that is capable of accommodating different brain-circuit mechanisms for a wide range of conscious contents.
Qualified candidates would have a bachelor's or master's degree in neuroscience, psychology, cognitive (neuro)science or adjacent fields, and a strong interest in human neuroscientific questions. We are based in NYC and the position comes with competitive salary and benefits.
To demonstrate the damage, I just received this from an esteemed colleague: "[...]lets see where the IIT debate has settled then...mainly concerned for my PhDs/postdocs being turned off by the field"
A list of lab alumni who previously held this position can be found here: . They all had at least one first-author paper from their time in the lab in journals including PNAS, eLife and Nat Commun. They have also won multiple national awards.
To inquire, please send an email to the PI, with a cover letter in the email body and attach your CV. Use subject line “2022 RA inquiry”. Please also indicate whether you would need a work visa. We have some flexibility in the start date, but fall 2022 is preferred. Thanks for RT
Analyzing directed information flow between electrodes, we found strengthened corticocortical feedback during the preferred percept that is congruent with the long-term prior. Conversely, there was strengthened feedforward input during the non-preferred percept.
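The directed-influence logic can be illustrated with a toy Granger-style computation (a sketch on simulated signals, not the actual iEEG analysis pipeline): a source region "influences" a target if adding the source's past shrinks the error of predicting the target from its own past.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 5000
x = np.zeros(T)  # "sender" signal
y = np.zeros(T)  # "receiver" signal
for t in range(1, T):
    x[t] = 0.5 * x[t - 1] + rng.normal()
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + rng.normal()  # x drives y, not vice versa

def granger(target, source):
    """Log variance ratio: how much the source's past improves a 1-lag prediction."""
    Y = target[1:]
    A = np.column_stack([target[:-1], np.ones(T - 1)])               # restricted model
    B = np.column_stack([target[:-1], source[:-1], np.ones(T - 1)])  # full model
    res_r = Y - A @ np.linalg.lstsq(A, Y, rcond=None)[0]
    res_f = Y - B @ np.linalg.lstsq(B, Y, rcond=None)[0]
    return np.log(res_r.var() / res_f.var())  # > 0 means directed influence

assert granger(y, x) > 0.05 > granger(x, y)  # influence detected in the x -> y direction only
```

Real analyses use multivariate, frequency-resolved versions of this idea, but the asymmetry test is the same.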
@abhidwarakanath
This is unfair, Abhi. Some of my data present direct challenges to IIT, and Giulio invited me as a *neutral* scientist to speak at his summer school, knowing I will present those data and their implications. Speaking of ignoring contrary data, which camp is not guilty?
In bistable perception triggered by ambiguous images, often one of the two perceptual interpretations is more prevalent, reflecting an individual-specific long-term prior that is stable over time. E.g., people tend to perceive the Necker cube as viewed from the top more often.
For detailed discussions, read our open-access paper! This work was conducted
@NYULH_Neuro
, in collaboration with colleagues at NYU Epilepsy center including
@dfriedman36
,
@adeenflinker
and many others who are not on here. We are grateful to the support by an
@NSF
CAREER award.
@CyrilRPernet
The 1/f fluctuations in behavior and 1/f fluctuations in brain activity are likely related. I discussed this link in a 2014 TiCS review: , and a more recent (2018) review in J Neurosci:
We are doing a similar research study at NYU Langone! If you have PD (or a related diagnosis) and hallucinations, and live in NYC/tristate area, please get in touch with me or
@YuanhaoWu
. Our study is open to almost all ages, and we will compensate you for your time.
Do you have Parkinson's? Are you interested in volunteering for a brain scan? We are looking for people to help us understand how hallucinations happen in Parkinson's. See
@ParkinsonsUK
for more info. Thanks
@RobbieRinder
for helping spread the word
In sum, while the hippocampus/MTL is central to one-shot learning in episodic memory ('where did I park my car this morning?'), one-shot perceptual learning does not depend on conscious remembering or the MTL.
Some more background:
A review with
@realAlexHuk
about this general framework:
A review about 1/f broadband activity:
Our model about its sources:
Where it all started for me:
6/6
I'm happy to be starting my postdoc with
@BiyuHe
at
@NYULH_Neuro
today! Excited to work on projects examining theories of conscious perception and prediction in the brain 🧠
You would be embedded in one of the most vibrant neuroscience communities in the world, including
@NYULH_Neuro
and
@NYU_CNS
, with world-class neuroimaging infrastructure
@cai2r
. A commitment of at least 2 years is needed.
We obtained direct cortical recordings from 14 neurosurgical patients as they reported their continuously changing perceptual content while viewing the Necker cube and Rubin face-vase illusion. Data from the two images were separately analyzed, providing a reproducibility check.
Check out our new paper at
@NatureComms
where we show that bifurcation dynamics might be a signature of consciousness, even in the absence of report:
Summary thread below ⬇️
These results suggest that long-range corticocortical feedback can underlie long-term priors' influence on perception, and inform future theories about the role of priors in perception.
@hakwanlau
@thandrillon
@ADemertzi
@theASSC
@ASSC27tokyo
@kanair
Whether you like the theory or not, and the people or not, IIT has inspired beautiful experiments such as Massimini's. The core of the theory might not be empirically testable yet, but how do you test the “pointer” idea in your theory, Hakwan?
Join us for the upcoming Feindel virtual seminar with Biyu Jade He (
@BiyuHe
), a cognitive neuroscientist from
@nyuniversity
. She will give an overview of her lab's recent work on neural mechanisms of conscious visual perception. Register now:
Attending VSS in the next few days? Come check out presentations from our lab!
Talk by
@ayaka_hach
: Mapping the invariance properties of perceptual priors in one-shot perceptual learning.
Tuesday 5/23, 8:15 – 9:45 am, Talk Room 1
Session: Plasticity and Learning 2,
#51.11
Our findings strongly support the 'informational bottleneck' hypothesis. Moreover, the length of neural integration of sensory history correlates with an individual subject's predictive performance. This provides an individual-based neural marker of naturalistic prediction. 5/6
Congrats to all the trainees selected for travel awards for the upcoming consciousness workshop at the NIH on June 26-28! And many thanks to
@d_soto_b
@KalinaChristoff
@SimaMofakham
@fanispa
and Yuri Saalmann for serving on the reviewing committee!!
A computational model incorporating hierarchical-predictive-coding and attractor-network elements reproduced both behavioral and neural findings. In this model, a bias at the top level propagates down the hierarchy, and a prediction error signal propagates up.
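The descending-bias / ascending-error message passing can be caricatured in a few lines (a toy illustration of the scheme with made-up numbers, not the actual hierarchical model):

```python
# Toy two-level predictive-coding loop: the top level sends a prediction down,
# the bottom level sends the prediction error back up, and the top-level
# belief (initialized at the long-term prior/bias) is nudged by that error.
stimulus = 0.8  # sensory evidence for one perceptual interpretation
belief = 0.2    # top-level bias (prior), used as the initial belief
rate = 0.1      # how strongly ascending error updates the belief
errors = []
for _ in range(100):
    prediction = belief               # descending signal
    error = stimulus - prediction     # ascending prediction-error signal
    belief += rate * error            # belief moves toward the evidence
    errors.append(abs(error))

assert errors[0] > errors[-1] < 1e-3  # error shrinks as predictions improve
```

Adding attractor dynamics at the top level, as in the paper's model, is what lets such a loop settle into one of two discrete percepts rather than a graded average.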
What is the neural substrate of one-shot perceptual learning? How is it that once you recognized the dog in the Dalmatian Dog picture, from then on, you automatically and effortlessly perceive the dog the moment the same picture is presented to you? 2/6
Abstract submission deadline is *April 3rd*.
The event will be live streamed. Registration for online attendance is now OPEN.
For detailed information, check out the workshop website👆. Additional inquiries can be sent to the workshop email address listed on the website.
3) When a phase-scrambled image was presented, participants sometimes experienced false perceptions, accompanied by enhanced ACC/IFJ activation & DMN deactivation, all with decodable information. These regions may subserve top-down inference under increased sensory uncertainty.
Thanks to everyone who sent me congrats!!! I was under a deadline yesterday and wasn’t able to write a personalized reply to each msg that was coming in. But I saw them all and THANK YOU to all of your messages!!!
Pre-stimulus pupil size on a given trial predicted conscious recognition, categorization performance, and the quality of neural representation of sensory input.
Prior work from our lab () showed that 1/f aperiodic (aka arrhythmic) brain activity tracks dynamical stimuli with 1/f-type temporal statistics, and contains an evolving prediction about future sensory input based on integration of past sensory input. 3/6
In sum, distinct frequency-domain neural signatures supporting perceptual content and perceptual memory!
We gratefully acknowledge support by
@NSF
,
@NatEyeInstitute
and the Irma T. Hirschl Trust.
Using resting-state activity and pre-stimulus baseline during task, we found that spontaneous pupil size fluctuations co-varied with cortical activity power in most frequency bands across multiple large-scale resting-state networks.
Together, these results reveal rich and heterogeneous representational dynamics across large-scale brain networks during conscious visual object recognition when sensory input has a relatively high level of uncertainty.
We thank the support by
@NIHFunding
and
@NYULH_Neuro
.
We presented visual images at a participant’s perceptual threshold, such that the same image, when repeatedly presented, is sometimes consciously recognized and at other times not. We used 7T fMRI to record brain activity and probe the underlying neural mechanisms. Key findings:
Dynamical stimuli in natural environments contain temporal correlations over a range of time scales, manifesting as 1/f-type power spectra. How humans exploit such natural statistical regularities to make predictions remains unknown. 2/6
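To get a concrete feel for "1/f-type power spectra", one can synthesize such a time course by spectral shaping (a generic recipe for 1/f noise, not our stimulus-generation code):

```python
import numpy as np

rng = np.random.default_rng(2)
n, fs = 2 ** 14, 100.0  # samples, sampling rate (Hz)
freqs = np.fft.rfftfreq(n, 1.0 / fs)

# Amplitude proportional to f^(-1/2) so that power goes as 1/f;
# random phases make the time course noise-like
amp = np.zeros_like(freqs)
amp[1:] = freqs[1:] ** -0.5
spectrum = amp * np.exp(1j * rng.uniform(0.0, 2 * np.pi, freqs.size))
signal = np.fft.irfft(spectrum, n)

# Sanity check: log-log slope of the power spectrum is close to -1
psd = np.abs(np.fft.rfft(signal)) ** 2
slope = np.polyfit(np.log(freqs[1:]), np.log(psd[1:]), 1)[0]
assert -1.1 < slope < -0.9
```

The long-range temporal correlations in such a signal are exactly what a predictive system can exploit: the recent past is informative about the near future across many time scales.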
Another point I should have highlighted: We used a novel state space analysis approach to simultaneously extract multivariate neural patterns supporting different behavioral metrics in the same task. This method can be easily adapted to other tasks. Read our paper to learn more!
The above findings were obtained with relatively slow fluctuations of pupil size. Fast, momentary pupil constriction/dilation also correlated with MEG activity, with time courses similar to single-unit findings in animal models, although without significant effects on behavior.
Recognition outcome could be reliably decoded from MEG signals starting at 200 ms after image onset, long before significant decoding of stimulus category at 470 ms. Such late emergence of category information is very different from that observed during "core object recognition".
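The time-resolved decoding logic goes roughly like this (a self-contained toy with simulated "sensor" data and a simple nearest-centroid decoder; the actual study used real MEG and different decoding methods):

```python
import numpy as np

rng = np.random.default_rng(3)
n_trials, n_sensors, n_times = 200, 30, 50  # 50 samples ~ 0-500 ms at 100 Hz
times_ms = np.arange(n_times) * 10

y = np.repeat([0, 1], n_trials // 2)  # not recognized (0) vs recognized (1)
X = rng.normal(0.0, 1.0, (n_trials, n_sensors, n_times))
pattern = rng.normal(0.0, 1.0, n_sensors)
# Inject a class-specific sensor pattern only from ~200 ms onward
X[:, :, times_ms >= 200] += 0.8 * np.outer(2 * y - 1, pattern)[:, :, None]

def decode(Xt, y, n_folds=5):
    """Cross-validated nearest-centroid decoding accuracy at one time point."""
    idx = rng.permutation(len(y))
    correct = 0
    for test in np.array_split(idx, n_folds):
        train = np.setdiff1d(idx, test)
        c0 = Xt[train][y[train] == 0].mean(axis=0)
        c1 = Xt[train][y[train] == 1].mean(axis=0)
        pred = np.linalg.norm(Xt[test] - c1, axis=1) < np.linalg.norm(Xt[test] - c0, axis=1)
        correct += (pred.astype(int) == y[test]).sum()
    return correct / len(y)

acc = np.array([decode(X[:, :, t], y) for t in range(n_times)])
assert acc[times_ms < 150].mean() < 0.6    # chance level before the signal appears
assert acc[times_ms >= 250].mean() > 0.85  # strong decoding after onset
```

Running a decoder independently at each time point is what yields onset latencies like the 200 ms and 470 ms figures: the first time the accuracy curve reliably exceeds chance.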
@hakwanlau
@thandrillon
@ADemertzi
@theASSC
In fact, the ASSC board held a vote in July, and the consensus was to *not* make a public statement about the 25 years of Consciousness event at the recent annual meeting, which was the trigger for the open letter.
@GregoryHickok
Von Stein proposed this idea in her 2000 PNAS paper based on intracortical recordings in the cat visual cortex. Buzsaki did also in his brain rhythms book.
1) Visual images, whether consciously recognized or not, (de)activate similar large-scale (cortical and subcortical) brain networks, with almost the same spatial extent. This suggests that the key neural mechanism for conscious recognition is not recruitment of additional areas.
Adding
@lay_naaaa
to this thread, who worked tirelessly on this project as an
@NYU_CNS
undergrad, and is now a PhD student and a Marian Diamond fellow
@UCBerkeley
!
@schoppik
Yep, this one: , thanks to
@smfleming
who also mentioned it. This one is a follow-up that digs deeper. Btw, here "over time" was very fast & robust, it's a one-shot perceptual learning paradigm.
2) Information about recognition content could be decoded from activity patterns in ~all of the activated and deactivated cortical regions, but not subcortical regions. In unrecognized trials, neither the stimulus nor the reported category was decodable in the same brain regions.