With ChatGPT, OpenAI is currently testing a dialog-based general-purpose language model. According to cognitive scientist Gary Marcus, ChatGPT is just a foretaste of GPT-4.

Rumors about GPT-4 have been floating around the web for weeks, and they share two claims: GPT-4 will significantly outperform GPT-3 and ChatGPT, and it will be released relatively soon, likely in the spring.

OpenAI is currently running a joint grant program with Microsoft, whose participants likely already have access to GPT-4. Microsoft CTO Kevin Scott recently predicted an even more significant AI year in 2023.

The first open source equivalent of OpenAI’s ChatGPT has arrived, but good luck running it on your laptop — or at all.

This week, Philip Wang, the developer responsible for reverse-engineering closed-source AI systems including Meta’s Make-A-Video, released PaLM + RLHF, a text-generating model that behaves similarly to ChatGPT. The system combines PaLM, a large language model from Google, and a technique called Reinforcement Learning from Human Feedback — RLHF, for short — to create a system that can accomplish pretty much any task that ChatGPT can, including drafting emails and suggesting computer code.

But PaLM + RLHF isn’t pre-trained. That is to say, the system hasn’t been trained on the example data from the web necessary for it to actually work. Downloading PaLM + RLHF won’t magically install a ChatGPT-like experience — that would require compiling gigabytes of text from which the model can learn and finding hardware beefy enough to handle the training workload.
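For a sense of what that training loop involves, here is a minimal sketch of the RLHF idea in miniature: a toy policy is nudged by policy-gradient updates toward outputs that a reward model scores highly. Everything here is illustrative — the eight-token vocabulary, the reward_model that stands in for a network trained on human preference rankings, and the plain REINFORCE update used in place of the PPO-with-KL-penalty that production RLHF systems use. It is not the PaLM + RLHF code.

```python
# Toy illustration of the RLHF loop described above -- NOT the PaLM + RLHF code.
# The "policy" is a tiny categorical language model; the "reward model" is a
# stand-in for a network trained on human preference rankings.
import numpy as np

rng = np.random.default_rng(0)
VOCAB = 8                  # toy vocabulary size (assumed)
SEQ_LEN = 4                # toy completion length (assumed)
logits = np.zeros(VOCAB)   # policy parameters: one logit per token

def sample_completion():
    """Sample a sequence of tokens from the current policy."""
    probs = np.exp(logits) / np.exp(logits).sum()
    return rng.choice(VOCAB, size=SEQ_LEN, p=probs), probs

def reward_model(tokens):
    """Stand-in reward model: prefers completions containing token 3."""
    return float((tokens == 3).mean())

# REINFORCE-style policy-gradient update driven by the reward model.
# Real RLHF uses PPO with a KL penalty against the pretrained model,
# but the feedback loop has the same basic shape.
lr = 0.5
for step in range(200):
    tokens, probs = sample_completion()
    reward = reward_model(tokens)
    for t in tokens:
        grad = -probs.copy()
        grad[t] += 1.0              # gradient of log p(t) w.r.t. logits
        logits += lr * reward * grad

print("learned distribution:", np.round(np.exp(logits) / np.exp(logits).sum(), 2))
```

After a few hundred updates the toy policy concentrates its probability mass on the token the reward model favors — the same dynamic, at vanishingly small scale, that makes the real training run so expensive in data and compute.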

Summary: Findings support the prevailing view that neural networks store information by making short-term alterations to their synapses. The study sheds new light on the role of short-term synaptic plasticity in working memory.

Source: Picower Institute for Learning and Memory.

Between the time you read the Wi-Fi password off the café’s menu board and the time you can get back to your laptop to enter it, you have to hold it in mind. If you’ve ever wondered how your brain does that, you are asking a question about working memory that researchers have strived for decades to explain. Now MIT neuroscientists have published a key new insight into how it works.
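The mechanism at issue can be caricatured in a few lines of code: instead of neurons firing continuously to hold the password in mind, a synapse is transiently strengthened, and the memory rides on that decaying trace until it is refreshed. This is a deliberately simplified toy sketch of short-term synaptic plasticity, not the study’s model; the time constant and update values are assumptions chosen for illustration.

```python
# Toy sketch of "activity-silent" working memory via short-term synaptic
# plasticity: a burst of activity strengthens a synapse, and the trace
# decays back toward baseline unless it is refreshed by rehearsal.
# The time constant and update values are illustrative, not from the study.
import math

TAU = 2.0          # decay time constant in seconds (assumed)
BASELINE = 1.0     # resting synaptic weight (assumed)
BOOST = 0.5        # transient strengthening per activity burst (assumed)

def decay(weight, dt):
    """Exponential relaxation of the synaptic weight toward baseline."""
    return BASELINE + (weight - BASELINE) * math.exp(-dt / TAU)

w = BASELINE
w += BOOST            # read the Wi-Fi password: burst of activity
w = decay(w, 1.5)     # walk back to the laptop: trace decays
w += BOOST            # mentally rehearse: trace is refreshed
w = decay(w, 1.5)
print(f"synaptic trace above baseline: {w - BASELINE:.3f}")
```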

A security researcher was awarded a bug bounty of $107,500 for identifying security issues in Google Home smart speakers that could be exploited to install backdoors and turn them into wiretapping devices.

The flaws “allowed an attacker within wireless proximity to install a ‘backdoor’ account on the device, enabling them to send commands to it remotely over the internet, access its microphone feed, and make arbitrary HTTP requests within the victim’s LAN,” the researcher, who goes by the name Matt, disclosed in a technical write-up published this week.

Such malicious requests could not only expose the Wi-Fi password but also give the adversary direct access to other devices connected to the same network. Following responsible disclosure on January 8, 2021, Google remediated the issues in April 2021.
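To make the attack surface concrete, the sketch below shows the kind of unauthenticated LAN request at issue, aimed at the local setup API that Google Home and Cast devices exposed before the fix. The IP address is hypothetical, and the port and endpoint are drawn from community documentation of the Cast setup API rather than from the researcher’s write-up, so treat this as an illustration of the request pattern, not as exploit code.

```python
# Illustrative sketch of probing the local setup API the write-up describes --
# the kind of unauthenticated LAN request the flaws enabled.
# Port 8008 and /setup/eureka_info come from community documentation of the
# Google Cast setup API (since locked down); this is NOT the exploit itself.
import json
import urllib.request

DEVICE_IP = "192.168.1.42"  # hypothetical address of a speaker on the LAN

def fetch_device_info(ip):
    """Query the (formerly unauthenticated) local setup endpoint."""
    url = f"http://{ip}:8008/setup/eureka_info"
    with urllib.request.urlopen(url, timeout=5) as resp:
        return json.load(resp)

if __name__ == "__main__":
    info = fetch_device_info(DEVICE_IP)
    print(info.get("name"), info.get("build_version"))
```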

One afternoon in the fall of 2019, in a grand old office building near the Arc de Triomphe, I was buzzed through an unmarked door into a showroom for the future of surveillance. The space on the other side was dark and sleek, with a look somewhere between an Apple Store and a doomsday bunker. Along one wall, a grid of electronic devices glinted in the moody downlighting—automated license plate readers, Wi-Fi-enabled locks, boxy data processing units. I was here to meet Giovanni Gaccione, who runs the public safety division of a security technology company called Genetec. Headquartered in Montreal, the firm operates four of these “Experience Centers” around the world, where it peddles intelligence products to government officials. Genetec’s main sell here was software, and Gaccione had agreed to show me how it worked.

He led me first to a large monitor running a demo version of Citigraf, his division’s flagship product. The screen displayed a map of the East Side of Chicago. Around the edges were thumbnail-size video streams from neighborhood CCTV cameras. In one feed, a woman appeared to be unloading luggage from a car to the sidewalk. An alert popped up above her head: “ILLEGAL PARKING.” The map itself was scattered with color-coded icons—a house on fire, a gun, a pair of wrestling stick figures—each of which, Gaccione explained, corresponded to an unfolding emergency. He selected the stick figures, which denoted an assault, and a readout appeared onscreen with a few scant details drawn from the 911 dispatch center. At the bottom was a button marked “INVESTIGATE,” just begging to be clicked.