Differential Neurotechnology Development
Summary
Neurotechnology has advanced rapidly over the past few years. Combined with AI, it could become a force for good, but it also carries serious risks. To maximise the benefits that neurotechnology could bring to society (including curing neuro disorders, providing insight into subjective experience, and helping ensure AI is developed safely), it is important to consider which neurotechnologies to accelerate, and how.
Uncertainty
The content of this article is largely based on research by Milan Cvitkovic and 80,000 Hours. Due to its exploratory nature, we feel less confident in the recommendations in this article than in our other articles.
It has been argued that transformative AI, able to automate the human activities needed to speed up scientific and technological advancement, will be developed this century. If so, it may be better to focus on making sure that this transformative AI is aligned with human values, so that the technologies it develops benefit humanity.
Published: 30 Jan 2024 by Jessica Wen
Cause area overview
What is neurotechnology?
Neurotechnology refers to any tool that directly observes brain or nervous system activity, or directly affects brain or nervous system function. Familiar examples include (functional) Magnetic Resonance Imaging, brain-computer interfaces, and antidepressant drugs.
Current capabilities and risks
Neurotechnology is already surprisingly advanced, and is being used to convict criminals and develop brain-to-brain communication. Here are some examples:
Table 1: Some current neurotechnology capabilities and risks (from Cvitkovic, Cause Area: Differential Neurotechnology Development, 2022)
| Neurotechnology | Description | Examples | Risks |
|---|---|---|---|
| Small molecule drugs | Diffuse through neural tissue and physically interact with biological substrates at the nanometre scale | | Health risks of commercial drugs tend to be well-studied |
| Electroencephalography (EEG) | Non-invasively records voltage changes caused by neural activity with electrodes on the scalp, with centimetre spatial (though superficial) and millisecond temporal resolution | | Potential use in workplace surveillance |
| Electrocorticography (ECoG) | Invasively records voltage changes caused by neural activity with electrodes on or in the cortex, with millimetre spatial and millisecond temporal resolution | | Invasive implants may produce serious side-effects |
| (functional) Magnetic Resonance Imaging ((f)MRI) | Non-invasively images brain structure or (if fMRI) activity using changes in the local magnetic fields of atoms in neural tissue, typically at millimetre spatial and second temporal resolution | | A study using fMRI plus GPT-1 showed that brain activity could be decoded into words; developing such "mind reading" could lead to privacy violations |
| Transcranial Magnetic Stimulation (TMS) | Non-invasively stimulates neuron firing via electromagnetic induction, mostly on the surface of the brain, at centimetre spatial and hundreds-of-milliseconds temporal resolution; FDA-approved for treatment-resistant depression, anxiety, OCD, and smoking cessation | | Any stimulation device may be at risk of hacking, resulting in a privacy violation and the risk that the person's resulting behaviour is not their own |
| and more... | See Cvitkovic, Cause Area: Differential Neurotechnology Development, 2022 | | |
The potential impacts of neurotechnology
Treating neurological and neuropsychiatric disorders
Pharmaceutical treatment methods have been used since the early 20th century, and neurotechnology R&D has long focused on treating neurological and neuropsychiatric disorders. The two are not the same: neurological disorders generally refer to diseases like Alzheimer's, Parkinson's, or epilepsy, which have observable pathologies in the structure or activity of neurons, while neuropsychiatric disorders include diseases like depression and ADHD that don't (yet). We will collectively call them "neuro disorders" here.
This study estimated that 16% of the 2019 global burden of disease is attributable to mental illness, so being able to treat these disorders with new neurotechnologies could be incredibly beneficial.
Issues with identity
The use of neurotechnology often changes users' personalities and behaviours. This raises questions about identity and selfhood, about what degree of cognitive and emotional alteration is acceptable, and other ethical problems. Neurotechnological interventions being developed now are "explicitly and implicitly, orientated toward the concept of personhood" (Müller and Rotter, 2017).
Direct manipulation of subjective wellbeing
The potential impact of neurotechnology on subjective wellbeing is substantial, given that our current understanding is that wellbeing is intricately tied to the physiological state of one's nervous system. This implies that future neurotechnologies could alleviate suffering and enhance subjective wellbeing. Treatments for depression and chronic pain serve as evidence that subjective wellbeing isn't solely contingent on external circumstances, and our day-to-day experience of unproductive suffering (e.g. work stress, crippling guilt, unhealthy habits) highlights how prevalent unnecessary suffering is in human life. This is to say nothing of the worse suffering experienced by non-human animals.
The upper limits of achieving positive subjective wellbeing through direct manipulation of brain states remain unknown, raising questions about what defines a good life in a society with advanced neurotechnology. However, quantifying the value of increased subjective wellbeing poses challenges, as existing metrics like DALYs are not designed for this purpose. Measures based on self-reports, such as WELLBYs, face calibration issues (e.g. assuming that life satisfaction is bounded when reporting it on a 0-10 scale), so measuring subjective wellbeing remains an ongoing challenge.
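To make the WELLBY metric concrete, here is a minimal sketch of the arithmetic, assuming the commonly used definition (one WELLBY is a one-point change in life satisfaction on a 0-10 scale, sustained for one person for one year); the function and the numbers are purely illustrative and not taken from the sources above.

```python
# Minimal sketch of a WELLBY calculation, assuming the common definition:
# 1 WELLBY = a 1-point change in life satisfaction (0-10 scale) sustained
# for one person for one year. All numbers below are purely illustrative.

def wellbys(delta_life_satisfaction: float, people: int, years: float) -> float:
    """Return WELLBYs for a uniform change in self-reported life satisfaction."""
    return delta_life_satisfaction * people * years

# Hypothetical example: an intervention raises life satisfaction by 0.5 points
# for 1,000 people, and the effect lasts 2 years.
print(wellbys(0.5, 1_000, 2.0))  # 1000.0 WELLBYs

# The calibration worry mentioned above: the 0-10 scale is bounded, so someone
# already reporting 10/10 cannot register further gains, and a one-point change
# may not mean the same thing to different people or at different scale points.
```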
Despite the positive aspirations, concerns arise about the darker side of neurotechnology. Substance abuse exemplifies how neurotechnologies can cause suffering, with economic costs reaching billions of USD annually on top of the direct harm. Substance abuse is an example of wireheading: the direct manipulation of the brain's pleasure or motivation systems, usually in a way that is harmful. Advanced neurotechnology might also be weaponised for malicious purposes, inflicting intense pain while preventing outward signs of suffering. This duality highlights the uncertainty surrounding the potential extremes of both positive and negative impacts on subjective wellbeing through advanced neurotechnologies.
Enhancement and value shift
Neurotechnology has historically been developed for military purposes, especially around enhancement (e.g. Air Force pilots were some of the earliest test cases for modafinil, a wakefulness-promoting drug). Neurotechnology is still being developed for military applications, including devices employing transcranial direct-current stimulation to enhance learning during target identification exercises. Many other human abilities could be enhanced, including impulse control, introspection, wisdom and reasoning, and energy levels, as well as more speculative enhancements that could be better described as new abilities. A widespread increase in wisdom and rationality could lead to a better society, but there is a risk that neurotechnologies that seem like enhancements could degrade individuals' or society's reasoning ability (analogous to how the internet promised access to the world's knowledge but ended up being a vector for misinformation).
More concerningly, neurotechnology intrinsically manipulates a user’s beliefs and motivation systems, which are interconnected with the person’s values. This value shift could occur inadvertently (e.g. a neurotechnology designed to increase empathy might unintentionally lead to a society tolerating antisocial behaviour) or with malicious intent (e.g. imagine if re-education camps were made 99% effective). This is not to even mention the morality of using neurotechnology to reform criminal behaviour.
Addressing these concerns requires navigating complex moral and legal dilemmas. How should society respond if an individual unintentionally adopts values they didn't originally desire but later wish to maintain? Additionally, in a world where brains communicate directly, distinguishing between persuasion and coercion becomes challenging. While governance, public education, and responsible neurotechnology development may mitigate undesirable value shifts, it could be hard to resist the potential side effects of desirable neurotechnologies.
Consciousness and Welfarism
Advances in neurotechnology may allow us to gather more information to understand the experience of non-human animals and determine who deserves moral consideration. It could help us answer questions about consciousness like those in Open Philanthropy’s 2017 Report on Consciousness and Moral Patienthood. This area of study is called welfarism.
AI safety
Neurotechnology presents potential benefits and risks in the realm of AI safety. These are outlined below:
Modelling collective human values
Neurotechnology could potentially provide rich data on human values for training AI systems.
It could be possible to obtain moral judgments and subjective wellbeing data to improve AI alignment.
This data could be used to train models that emulate individual human value judgments, which could then act as a "moral parliament" for advanced AI systems (a toy sketch of this aggregation idea follows below).
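As a concrete illustration of the "moral parliament" idea, here is a toy sketch (not from the sources above) in which several models, each standing in for one person's emulated value judgments, score a proposed action and the parliament aggregates their weighted approvals. The `Delegate` class, the hard-coded judgments, and the approval threshold are all hypothetical.

```python
# Toy sketch of a "moral parliament": several models, each emulating one
# person's value judgments, score a proposed action, and the parliament
# aggregates their (possibly weighted) approvals. Everything here is
# hypothetical and illustrative; no real value-learning system is implied.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Delegate:
    name: str
    judge: Callable[[str], float]  # maps an action description to approval in [0, 1]
    weight: float = 1.0

def parliament_approves(delegates: list[Delegate], action: str, threshold: float = 0.5) -> bool:
    """Weighted average of delegate approvals; approve if above the threshold."""
    total_weight = sum(d.weight for d in delegates)
    score = sum(d.weight * d.judge(action) for d in delegates) / total_weight
    return score >= threshold

# Hypothetical delegates with hard-coded stand-in judgments.
delegates = [
    Delegate("emulated_person_A", lambda a: 0.9 if "cure" in a else 0.2),
    Delegate("emulated_person_B", lambda a: 0.6),
    Delegate("emulated_person_C", lambda a: 0.1 if "surveillance" in a else 0.7, weight=2.0),
]

print(parliament_approves(delegates, "deploy a cure for a neuro disorder"))  # True
print(parliament_approves(delegates, "deploy workplace surveillance EEG"))   # False
```

Weighted averaging is only one possible aggregation rule; a real moral parliament proposal would also need to settle questions like how delegates are selected, how weights are assigned, and how disagreement and uncertainty are handled.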
Algorithm design inspired by the brain
A lot of AI development is already based on brain cognition research.
In the future, it could be possible to emulate aspects of the human brain's operation, such as neural algorithms or even whole-brain emulation. These simulations could be used to make sure AI systems are aligned with human values through mimicry or testing (a small illustrative example of a brain-inspired learning rule is sketched below).
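As a small, illustrative example of the kind of brain-inspired algorithmic idea this refers to (and not a method proposed in the sources above), here is a minimal Hebbian learning rule ("cells that fire together, wire together"), with a crude weight normalisation standing in for homeostatic plasticity. Modern AI systems mostly use gradient descent instead; the point is only to show a learning rule derived from neuroscience.

```python
# Minimal sketch of a Hebbian learning rule, one classic example of a
# brain-inspired algorithm. Illustrative only; it is not how the AI systems
# discussed in this article are built.
import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_outputs = 8, 4
weights = rng.normal(scale=0.1, size=(n_outputs, n_inputs))
learning_rate = 0.01

for _ in range(100):
    x = rng.random(n_inputs)   # presynaptic activity
    y = weights @ x            # postsynaptic activity (linear neurons)
    # Hebbian update: strengthen weights in proportion to co-activity.
    weights += learning_rate * np.outer(y, x)
    # Normalise rows so weights don't grow without bound (a crude stand-in
    # for Oja's rule / homeostatic plasticity).
    weights /= np.linalg.norm(weights, axis=1, keepdims=True)

print(weights.round(2))
```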
Improving human cognition
Individual reasoning and communication abilities could be enhanced through neurotechnology.
Coordination problems could be understood and navigated better, potentially improving society and how AI is trained.
It could be possible to build hybrid human-AI systems where key decision-making or goal-setting parts of the architecture are delegated to circuitry in real human brains.
Risks
If compromised, neurotechnology could provide AIs with another tool to manipulate human behaviour and values.
Increasing neuroscientific knowledge could accelerate the development of transformative AI, which we as a society might not be ready for.
Further research is essential to assess the impact of specific neurotechnologies on AI safety and to compare them to other AI safety strategies.
How can engineers contribute to differential neurotechnology development?
What are the bottlenecks?
Building beneficial neurotechnology (neurobiology, biomedical engineering and bioengineering, electrical engineering, mechanical engineering, and medical device engineering are the most relevant and sought-after skills)
All neurotechnology development aims to benefit humanity, but differential neurotechnology development specifically refers to proactively creating the most beneficial or risk-reducing neurotechnologies before others. Neuralink is the most prominent example: it aims to "merge" humans and AI to tackle AI safety concerns.
Further research is required to assess the contributions of other funders, legislation, companies (aside from the $363M investment in Neuralink) and neuroethicists to differential neurotechnology development.
If you are confident in your assessments of the most beneficial neurotechnologies, you could build them yourself (or fund their creation). Convergent Research is launching some Focused Research Organisations to develop longtermist-focused neurotechnology.
Stewarding intellectual property
Private actors with control over key intellectual property in neurotechnology have significant influence over how that technology is used (unlike in AI algorithm development). Historically, this has led to patents from smaller companies being acquired and shelved by large companies, preventing a lot of potentially promising neurotechnology development.
A more beneficial IP environment could be achieved by a philanthropist funding a patent pool (none yet exists for neurotechnology), which would cost less than $100k; introductions to parties willing to do this work are available to potential funders. A patent pool can cause problems if one company in the market has outsized market power, so setting this up needs to be done carefully.
Building infrastructure for beneficial neurotechnology research
Philanthropy could facilitate differential neurotechnology development by building better R&D infrastructure and preventing coordination failures.
Top-quality open-source software is an example of infrastructure that could make the operation of neurotechnologies transparent to the public, improve security, and avoid multi-homing costs.
Establishing clinical cohorts for testing unproven neurotechnologies, particularly those targeting non-disease applications (which requires a cohort of healthy subjects, who are much harder to recruit), is important for accelerating development. 1Day Sooner, which recruits healthy volunteers for vaccine trials, is an example of this kind of organisation.
Producing a biobank of neural tissue samples or models could facilitate research on neurological disease.
Advocacy/government
Encourage funders, such as the NIH in the US and private foundations, to prioritise research funding for (differentially) useful neurotechnologies.
Engage with regulatory bodies, like the FDA and DOJ in the US, to influence the categorization and regulation of neurotechnologies, avoiding potential setbacks.
Advocate within industry groups like the IEEE Neurotechnologies for Brain Interfacing Group in the US to shape future industry standards for neurotechnologies.
Target advocacy efforts towards surgical boards and other relevant medical bodies to promote the acceptance of neurotechnologies with broad beneficial applications.
Fund research into identifying the best policy and regulatory levers, potentially through an organisation like CSET.
Funding research
The Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative is essentially the only US government funding devoted specifically to neurotechnology.
Donors dedicated to specific diseases frequently support projects that involve characterising diseases or conducting fundamental research, rather than investing in the development of neurotechnologies (e.g. the Michael J. Fox Foundation for Parkinson's Research).
Some of the funding for neuroscience research ends up funding neurotechnology research, but it is not easy to estimate how much or what is researched. Examples of neuroscience research institutes include:
Two Max Planck Institutes (one on Neuroscience, one on Brain Research)
The Howard Hughes Medical Institute’s Janelia Research Campus
Some disease-specific groups like the Michael J. Fox Foundation for Parkinson's Research
There are almost no neurotechnologists who work on neurotechnology for AI safety. This could be a huge gap for funders to fill.
Other research approaches that could be excellent opportunities to fund are listed here.
Career moves
It is possible to bridge from engineering disciplines (such as mechanical, electrical, robotics, or materials engineering) to neurotechnology by working on medical devices. This can be done through a large company like Siemens or GE, or through a startup.
Another way to work on neurotechnology research is by becoming an expert in imaging.
Perhaps a more direct route is to do a Master's in Neuroscience.
Risks, pitfalls, and things to keep in mind
The cost-effectiveness of neurotechnology development for treating neuro disorders needs to be assessed (it may be more cost-effective to fund malaria prevention instead, for example).
If you are interested in neurotechnology development to tackle AI safety, the risk-benefit trade-off needs to be considered.
Neurotechnologies could increase the ability of AI to manipulate human behaviour and values (increased “attack surface”).
Increasing our understanding of the brain could accelerate the development of AI capabilities. Are AIs built with this knowledge more or less dangerous than AIs built without?
Can neurotechnologies be developed fast enough to be meaningfully beneficial for AI safety?
There is a risk that bringing attention to the current and future capabilities of neuroscience could cause bad actors to realise the potential of neurotechnology. Attention hazards and info hazards are a concern for many other new technologies too. See our page on biosecurity for our guidance on navigating this thorny subject.
Learn more
Additional resources
Cvitkovic, Cause Area: Differential Neurotechnology Development, 2022
80,000 Hours Podcast: Nita Farahany on the neurotechnology already being used to convict criminals and manipulate workers
Brain Research Through Advancing Innovative Neurotechnologies (BRAIN) 2025: A Scientific Vision
Byrnes, Intro to Brain-like AGI Safety, 2022
Relevant organisations
Neuro Landscape has a great list of groups and companies/organisations to consider working at. Aside from the examples included in Table 1, here are some more from Cvitkovic, 2022:
Deep Brain Stimulation (DBS):
Peripheral and spinal nerve stimulation:
LivaNova (vagus nerve stimulation, approved for epilepsy and depression)
Cala Health (median and radial nerve stimulation, approved for tremor)
Precision Novi (spinal cord stimulation, approved for chronic intractable pain)
Surgical tools (too many to name, but important ones include):
Neurovascular stents
Stereotactic surgical equipment
Cochlear implants:
Intracortical motor BCI:
Functional ultrasound neuroimaging:
Transcranial ultrasound stimulation:
(functional) Near-Infrared Spectroscopy:
Openwater: uses ultrasound in combination with NIRS to get better spatial resolution and deeper signals than standard NIRS
Multispeckle diffuse correlation spectroscopy from Reality Labs (Meta)
Research institutes:
Two Max Planck Institutes (one on Neuroscience, one on Brain Research)
The Howard Hughes Medical Institute’s Janelia Research Campus
Some disease-specific groups like the Michael J. Fox Foundation for Parkinson's Research