
We present a developmental atlas that reconstructs the differentiation trajectories of all major CNS regions and offers insight into the sequential epigenetic changes underlying early human brain development, as modeled in organoids. The atlas shows that epigenetic regulation via the installation of activating histone marks precedes the activation of groups of neuronal genes.

To gain a broader understanding of memory encoding, we expanded our experiments to include two other stimulus types: colors and face pictures (see Materials and Methods). Both monkeys demonstrated high accuracy in memorizing grating orientations in the “orientation DMTS” task, colors in the “color DMTS” task, and face pictures in the “face DMTS” task [DP: ~94% and DQ: ~87% versus the 50% chance level, all P < 0.01 (one-sample t test)] (fig. S1), indicating that they had been well trained.
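As a hedged illustration of the one-sample t test mentioned above, the sketch below compares per-session accuracies against the 50% chance level; the session_accuracy values are invented placeholders, not data from the study.

```python
# Illustrative sketch: one-sample t test of task accuracy against the 50% chance level.
# The accuracy values below are placeholders, not values from the study.
import numpy as np
from scipy import stats

chance_level = 0.5
session_accuracy = np.array([0.95, 0.93, 0.94, 0.96, 0.92, 0.94, 0.95, 0.93])

t_stat, p_value = stats.ttest_1samp(session_accuracy, popmean=chance_level)
print(f"mean accuracy = {session_accuracy.mean():.2f}, t = {t_stat:.2f}, P = {p_value:.1e}")
```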

We implanted a Utah array in each monkey’s V1 area (see Materials and Methods; Fig. 1B) and presented the stimuli at the receptive field (RF) centers of the recorded neurons (fig. S2, A and D). This enabled simultaneous monitoring of neuronal activity throughout our experiments. Our analyses focused primarily on neuronal activity before probe stimulus onset.

Representative neuronal responses for two of the VWM content conditions in the orientation DMTS task at a selected electrode are shown in Fig. 1C. During the stimulus period (0 to 200 ms after cue onset), neurons displayed distinct firing patterns for the two content conditions (90° and 180° orientations). An off-response emerged following cue offset, and activity gradually diminished. During the delay period, defined as 700 to 1,700 ms after cue onset (thick gray line in Fig. 1C), neurons also exhibited a significant difference in firing rate between the two content conditions (N = 1,810 trials for 90°; N = 1,865 trials for 180°; all marked positions P < 0.01), without any bias in behavioral performance (N = 16 sessions, P = 0.94; right panel in Fig. 1C). At the same electrode, the difference in response between these two content conditions during the delay period was less prominent in incorrect-response trials and in the fixation task (Fig. 1D).
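As a rough sketch of this kind of delay-period analysis (not the paper’s exact procedure, which is described in its Materials and Methods), the code below computes per-trial firing rates in the 700 to 1,700 ms delay window and compares the two content conditions with a two-sample t test; the spike-time inputs and the choice of test are assumptions.

```python
# Minimal sketch of a delay-period firing-rate comparison between two VWM content
# conditions at one electrode. The spike-time inputs and the two-sample t test
# are illustrative assumptions, not the study's documented pipeline.
import numpy as np
from scipy import stats

DELAY_WINDOW = (0.7, 1.7)  # seconds after cue onset, as defined in the text

def delay_rate(spike_times_per_trial, window=DELAY_WINDOW):
    """Firing rate (spikes/s) inside the delay window for each trial."""
    start, end = window
    duration = end - start
    return np.array([
        np.sum((t >= start) & (t < end)) / duration
        for t in spike_times_per_trial
    ])

# spike_times_90 / spike_times_180: lists of spike-time arrays (in seconds), one per trial
def compare_conditions(spike_times_90, spike_times_180):
    rates_90 = delay_rate(spike_times_90)
    rates_180 = delay_rate(spike_times_180)
    t_stat, p_value = stats.ttest_ind(rates_90, rates_180)
    return rates_90.mean(), rates_180.mean(), p_value
```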

One ambition of computational neuroscience is that progress in artificial intelligence will continue to be informed by advances in our understanding of how the brains of various species evolved to process information. To that end, here the authors propose an expanded version of the Turing test, involving embodied sensorimotor interactions with the world, as a new framework for accelerating progress in artificial intelligence.

Nearly all the neural networks that power modern artificial intelligence tools such as ChatGPT are based on a 1960s-era computational model of a living neuron. A new model developed at the Flatiron Institute’s Center for Computational Neuroscience (CCN) suggests that this decades-old approximation fails to capture the full computational abilities of real neurons and may be holding back AI development.
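For readers unfamiliar with that 1960s-era abstraction, the sketch below shows the standard “point neuron” unit (a weighted sum of inputs passed through a single nonlinearity) on which such networks are built; it is a generic illustration, not the new CCN model.

```python
# Sketch of the classic "point neuron" abstraction referenced above: each unit
# computes a weighted sum of its inputs and applies one nonlinearity.
# This is a generic illustration, not the Flatiron/CCN model.
import numpy as np

def point_neuron(inputs, weights, bias, activation=np.tanh):
    """1960s-style unit: output = f(w . x + b)."""
    return activation(np.dot(weights, inputs) + bias)

x = np.array([0.2, -1.0, 0.5])   # presynaptic activity (illustrative values)
w = np.array([0.8, 0.1, -0.4])   # synaptic weights (illustrative values)
print(point_neuron(x, w, bias=0.1))
```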