
Lazy Sunday #1 - Mouse VR, EEG into words and VC's comfort zone

While mice get new tech, we may soon lose the ability to hide our thoughts. And why DeWave and NOIR could have a huge impact on the healthcare of the future.

Welcome to the first edition of “Lazy Sunday”, a hand-picked list of research, discoveries and developments in neuroscience, tech and management. Going forward, it will land in your inbox every weekend. Exciting!

Note from the editor (me): as this is the first of many, things might break, look slightly off, or misbehave in other small ways. If you spot anything, please do let me know. Brainthrough will get more elegant each week.

Now, back to why we are actually here. We are not that far away from machines reading our minds. Or from using our minds to work with machines. Or from being able to train our brains. There are also some funky gadgets. Definitely some good small-talk openers for the season’s last Christmas parties.

What’s new

A new way to translate brain signals into actions lets people move robots with their thoughts

By measuring electrical activity in the brain (via EEG, here is an explainer), algorithms can effectively detect three main parameters: object (e.g. toy), intention (pick or drop) and direction (up / down etc.). These parameters can then drive robotic arms without any physical intervention. You just think, the robot does. The impact on healthcare could be immense. 👴 👵

It might also be helpful to get chocolate without getting off the couch one day.

While rather slow and, well… robotic, it is impressive. Have a look at the short videos. (Project GitHub)
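For the technically curious, the decoding step described above can be pictured as three independent classifications over one EEG feature window. The sketch below is purely illustrative and not the project’s actual code: the labels, the two-dimensional feature vectors and the nearest-centroid approach are all assumptions made for the example.

```python
import numpy as np

def decode_command(features, templates):
    """Toy nearest-centroid decoder: for each command parameter
    (object, intention, direction), pick the label whose template
    vector lies closest to the EEG feature vector."""
    command = {}
    for param, labels in templates.items():
        best = min(labels, key=lambda lbl: np.linalg.norm(features - labels[lbl]))
        command[param] = best
    return command

# Hypothetical templates, imagined as learned offline during calibration.
templates = {
    "object":    {"toy": np.array([1.0, 0.0]),  "cup": np.array([0.0, 1.0])},
    "intention": {"pick": np.array([1.0, 1.0]), "drop": np.array([-1.0, -1.0])},
    "direction": {"up": np.array([0.0, 2.0]),   "down": np.array([0.0, -2.0])},
}

# A made-up feature vector for one EEG window.
print(decode_command(np.array([0.9, 0.8]), templates))
# → {'object': 'toy', 'intention': 'pick', 'direction': 'up'}
```

The real system works on far richer signals, but the shape of the problem is the same: turn a noisy brain-signal window into a small set of discrete command parameters, then hand those to the robot controller.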

DeWave translates EEG readings into words

Let us stay with EEG: researchers at the University of Technology Sydney have developed their own GPT-style model called DeWave. It analyses EEG data to derive the exact words a person is thinking. While deriving words from brain signals is not necessarily new (earlier research used fMRI data), this seems to be the first of its kind using EEG (easier to set up and measure). I expect many more non-invasive (no drilling into the skull) models powered by AI in the future. And AI is better at sifting through EEG noise. No hiding thoughts anymore. 🤷‍♂️

No less important to consider: this technology could be used in totally different ways. It could help people who have lost their ability to speak to communicate again, or paraplegics to move again with machine-powered exoskeletons. It could also be used for better marketing research. Or for interrogations, or even spying? You might never be able to lie again.

As our understanding of the brain’s electrical signals progresses, it will be time to think about the boundaries (think regulation) we might need to put in place. Otherwise, regulators will be playing catch-up yet again. And I don’t want my thoughts read at every passport control. (University of Technology Sydney)

No drilling needed! Ultrasound could help to direct medication inside our brains

Medicating our brain is difficult. Reaching specific areas is almost impossible without surgical intervention. Earlier trials steered tiny magnetic vehicles (like bubbles) into and through patients’ brains to reach target areas. The problem is that magnetic bubbles aren’t exactly body-friendly or biodegradable. New sound-directed lipid bubbles are a healthier choice. This could change the treatment of strokes, cancer and other conditions, as it also allows for different types of medication, faster delivery and lower dosages, since delivery is much more pinpointed. (ETH Zürich)

On top of tech and biz

There has always been a trend: hardware and software become smaller (less metal, less code) while achieving the same outcome. The economic impact is significant: cheaper to build and maintain, cheaper to carry around, distribute and store, easier to implement and use. And on top of that, each incremental reduction in complexity widens the scope of applications. VR for mice, anyone?

Talking about size and tracking: first VR headset for your mouse

Why do we have to run around with Apple Vision Pro headsets when there are clearly smaller options? In any case, this opens big opportunities for further research. And it just looks cute.

Welcome to wellness tracking level 5: All-Day Eye Tracking Glasses for improved wellness and productivity

Cars have it. Your research lab has it. Eye-tracking technology. Cars use it at a basic level to figure out when you are about to fall asleep. MindLink has come up with a tracking device you can wear all day to track your mental energy levels and improve your productivity and overall wellness. This is an interesting departure from the typical heart-rate and step trackers out there. (Yahoo / MindLink)

Monitoring your brain exercise: Mendi at the forefront of affordable and useful neurofeedback for your home

These use cases are exactly why I am so excited about neuroscience combined with powerful hardware getting smaller. You can play memory games and do Sudoku five times a week and hope your brain gets fitter. But you don’t see any gains: there is no weight to lose, and your brain does not visibly grow. Long story short, neuroplasticity (your brain’s ability to rewire, learn etc.) is invisible. Neurofeedback technology could, no - sorry, will add a whole new layer of wearables.

Mendi is Sweden-based, crowdfunded and EU-endorsed. Even Reddit users are pretty happy with the device. (No, I am not affiliated with them, they are not sponsoring… yet 😅) (Mendi / Research Paper)

Microsoft reduced model size to get the same outcome

“More is not always better” seems to be coming true in the world of LLMs. Fewer parameters mean less energy consumption and faster computation = ⬆️ efficiency. Some might fear smaller models are easier to hide (?) or could be less transparent. But for consumers this might also mean: the smaller models become, the easier it will be to run GPT-4-like generative AI on device.

Bye Siri!! (Semafor)
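A quick back-of-envelope calculation shows why shrinking models matters for on-device use: the memory needed for the weights alone scales linearly with parameter count. The parameter counts below are illustrative examples, not Microsoft’s actual figures.

```python
def weights_gb(num_params, bytes_per_param=2):
    """Approximate memory for model weights alone, assuming fp16
    storage (2 bytes per parameter). Ignores activations, KV cache
    and runtime overhead, so real usage is higher."""
    return num_params * bytes_per_param / 1e9

# A hypothetical 2.7B-parameter small model vs a 70B-parameter large one:
print(weights_gb(2.7e9))  # ~5.4 GB: plausible on a laptop or high-end phone
print(weights_gb(70e9))   # ~140 GB: data-centre territory
```

Cut the parameter count by an order of magnitude (or quantise to fewer bytes per parameter) and the same rough math moves a model from server racks to your pocket.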

Can technology help us understand the minds of others?

Some thoughts on “mind reading”, less à la “The Mentalist” and more through fMRI and EEG. It is a question philosophers and researchers have been trying to answer for many decades: do we perceive the same things? Does the same food taste different to different individuals? Would the same food be perceived differently by the same person depending on circumstances? Would the same sales pitch land better in a different location? (MIT Technology Review)

“This line of research is still in its infancy, but it suggests that neuroscience might one day do more than simply telling us what someone else is experiencing. By using deep neural networks, the team was able to bring its subjects’ hallucinations out into the world, where anyone could share in them.”

Grace Huckins

When in doubt, revert to the “known”

Making investment decisions, especially in high-risk environments like venture capital, often means dealing with large unknowns (call them risks). The more complex the environment (deep tech, AI, bio), the more decisions are made based on the parameters that stay constant.

In this case: the founders’ pedigree and aptitude / attitude. As a VC you can’t know the technical details of every model, chemical process and so on. But you always work with founders.

Our brains are very good at converging quickly on the known (the already experienced). It is one reason why hiring experienced professionals for high-growth companies makes sense: they bring structure, they pull out playbooks, and they run those playbooks to avoid risks and pitfalls. Been there, done that.

Hire a seasoned professional in the early stage of a company, though, and you might end up with over-crafted processes and missed opportunities to really do something new (aka risky, but wow).

100,000 years ago, staying within a known environment meant a higher chance of physical survival. Today we know how to work against that instinct and get out of our comfort zones. But ultimately we drift back to the “known”.

Some brain basics will never disappear.

Misc but not least…

More than binary: 1 concept - 5 different levels explaining the Connectome.

Being able to explain complex topics depending on your audience is a master skill. Zooming in or out at the right time to keep your counterpart engaged is relevant as a founder, in sales or customer success, in engineering etc. It is also an excellent question to use when interviewing talent for a specific role. If you have mastered the knowledge, explaining it at different levels becomes easier.

Bobby Kasthuri is an amazing example of switching between “levels”. You try explaining the connectome to a 5-year-old.

If you read this: thank you!

Got anything that should be mentioned in here? Tell me.

Have you thought of somebody who could find this interesting as well? Feel free to share this link: https://bit.ly/3Tt1Mfh 🙏

Bye,

Alex
