Are we ready for chips in our brains?

Anna is an Army veteran who served her country and recently lost her arm to the blast from an improvised explosive device. While this is a life-changing event for Anna, current technology in the US allows her to receive a robotic arm controlled by a computer chip implanted in her brain. The chip uses electrodes to pick up the electrical signals emitted by brain cells called neurons, and artificial intelligence (AI) decodes those signals into Anna's intended movements, moving the robotic arm accordingly. This device is one example of a brain-computer interface (BCI). As Anna gets used to the prosthetic over the following months, it begins to feel like an actual part of her. Then one day, as Anna is driving, her prosthetic arm jerks suddenly, causing the car to swerve and plow into a parked car. Who is at fault? The software in the chip? The hardware in her body? How do we tell whether this was a mismatch between Anna's true intentions and the device's AI, or whether it was Anna's fault? Can her neural data be analyzed by lawyers to determine who is to blame? Does she have a right to the privacy of her own thoughts?


While this is a hypothetical situation, Anna's dilemma is one that could plausibly occur, and one for which we have no precedent to guide our response. As technology rapidly advances, ethical groundwork needs to be laid now to avoid problems like this one, and many others, in the future. There is no doubt that BCIs have tremendous potential to improve people's lives, but we need to tread carefully as we enter the thoughts and intentions that make up people's minds.

What are brain-computer interfaces (BCIs)?

A brain-computer interface is a direct communication pathway between the brain and an external device, like an electrode array or a computer chip. Neurons are brain cells that naturally communicate with each other using electrical signals. BCIs decode those signals to infer the user's intentions, as in the example above, where Anna can think about moving her arm and the arm moves as intended. BCIs are developing rapidly as computer chip technology improves; however, they aren't a new technology. Research into BCIs began in the 1970s at UCLA [1], and the first neuroprosthetic, a device that translates the brain's signals into movement, was approved in the mid-1990s.


Figure 1: Different types of technologies used to record neural signals [2]
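To make "decoding" concrete, here is a minimal sketch in Python of the core statistical step: learning a mapping from neural firing rates to intended movement during a calibration session, then using it to translate new activity into a command. Everything here is simulated and illustrative; real decoders (Kalman filters, deep networks) are far more sophisticated, and this is not the algorithm of any particular implant.

```python
import numpy as np

# Toy BCI decoder: learn a linear map from neural firing rates to an
# intended 2D movement velocity. All data here is simulated.

rng = np.random.default_rng(0)
n_neurons, n_samples = 96, 5000          # e.g., a 96-channel electrode array

# Simulate ground truth: each neuron is "tuned" to movement direction.
true_weights = rng.normal(size=(n_neurons, 2))
intended_velocity = rng.normal(size=(n_samples, 2))     # what the user meant
firing_rates = intended_velocity @ true_weights.T + rng.normal(
    scale=0.5, size=(n_samples, n_neurons))             # noisy neural signal

# "Calibration session": fit the decoder by least squares.
decoder, *_ = np.linalg.lstsq(firing_rates, intended_velocity, rcond=None)

# At runtime: translate a fresh burst of activity into a movement command.
new_intent = np.array([[1.0, 0.0]])                     # user imagines "move right"
new_activity = new_intent @ true_weights.T              # the resulting neural signal
print("decoded command:", (new_activity @ decoder).round(2))  # ~[1.0, 0.0]
```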

Currently, there are many obstacles to creating both invasive (requiring surgery) and non-invasive BCIs that can actually do what science fiction dreams of. Devices need to be as small and as flexible as possible, and compatible with the natural diversity of individuals' brains. This is a daunting task for current AI and hardware technology: the human brain contains over 80 billion neurons and some 100 trillion connections between them, more than the number of stars in the Milky Way. It is extremely difficult to create a wireless device capable of handling this amount of data. However, the last few years have seen an explosion of work in every aspect of neurotechnology, from neuroscience and neurosurgery to algorithm design and microelectronics. Meanwhile, computers have become fast enough to process that information and translate it into a useful command for a robotic arm or a computer cursor.
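For a rough sense of the data problem, here is a back-of-envelope sketch; the channel count and sampling rate below are illustrative assumptions in the ballpark of current research systems, not the specifications of any real device.

```python
# Back-of-envelope: raw data rate of a neural recording device.
# All numbers are illustrative assumptions, not the specs of a real implant.

channels = 1024            # recording electrodes (research arrays: ~100-3,000)
sample_rate_hz = 30_000    # resolving individual spikes takes ~30 kHz/channel
bits_per_sample = 16

bytes_per_second = channels * sample_rate_hz * bits_per_sample / 8
print(f"raw rate: {bytes_per_second / 1e6:.1f} MB/s")          # ~61.4 MB/s
print(f"per day:  {bytes_per_second * 86_400 / 1e12:.1f} TB")  # ~5.3 TB

# And 1,024 channels is a vanishing fraction of ~80 billion neurons, which
# is why wireless whole-brain recording remains far out of reach.
```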

What can BCIs do? 

BCI technology can change people's lives. BCIs are currently used not only by amputees but also for rehabilitation after stroke, epilepsy prevention, speech prostheses (see previous Neuwrite post), brain and sleep disorders, and tumor detection. In recent years, however, attention has turned to interfacing with the brain outside of therapeutic need. Today there are at least five companies working on BCIs, including Facebook and Elon Musk's Neuralink. These companies have begun to look into chips for anyone who wishes to have their own BCI. These interfaces would allow users to draw pictures, take photographs, and write text messages, all with just their thoughts. The Defense Advanced Research Projects Agency (DARPA) has also funded several groups, mostly in academia, to develop devices capable of sensing and stimulating the brain nearly instantly.

Further in the future lies a day when a device could understand and transmit thoughts directly: a universal neural interface that interacts with everything in your home environment. While this technology is currently a long way off, science marches on. As Jack Gallant, professor of psychology at the University of California, Berkeley, and a leading expert in cognitive neuroscience, states: “There’s no fundamental physics reason that someday we’re not going to have a non-invasive brain-machine interface. It’s just a matter of time. And we have to manage that eventuality.” [2]

As with the development of all technologies, there is a good side and a bad side. For example, in the early days of the internet, anyone could become a publisher as potentially influential as established news organizations, amplifying the voices of the marginalized. However, little thought was given to the possibility that one day people would use the internet to spread disinformation, harass women, conduct terrorism, or commit racist acts.

As both academia and industry race forward to develop better BCI implants, the ethical and legal frameworks have lagged behind. BCIs are just another example of technology developing faster than society can keep up with. While BCIs have incredible potential to improve quality of life for many, there are ethical concerns yet to be addressed. Brain-computer interfaces can give people with disabilities greater autonomy, but what are the potential consequences, and how can we mitigate them?

How can we protect people’s privacy?

One of the main concerns with BCIs is privacy. Since these chips are implanted in the brain, they read deeply personal information. Current electrode technology does not allow whole-brain recording, but devices that can measure neurons in a brain could create privacy issues that make Facebook's current privacy issues look trivial. Think of data security: “Whenever something is in a computer, it can be hacked—a BCI is by definition hackable,” said Marcello Ienca, a senior researcher at the Health Ethics & Policy Lab at the Swiss Federal Institute of Technology in Zurich [2]. “That can reveal very sensitive information from brain signals even if [the device] is unable to read [sophisticated] thoughts.” Then there are the legal questions: Can the police make you wear one? What if they have a warrant to connect your brain to a computer? How about a judge? Your boss? How do you keep your Amazon Alexa from sending you toothpaste ads every time you think about brushing your teeth?

As in Anna's hypothetical situation at the start, a host of ethical, legal, and social questions will arise as these interfaces progress. Can you delete your thoughts from the computer once the interface has picked them up? Further, Dr. Gallant states, “You have these latent desires that you may not have even thought of yet, we all have problems with racism or bad attitudes about other humans that we would rather not reveal because we don’t really believe them or we don’t think they should be spoken about. But they’re in there.” [2] It's not hard to imagine hackers gaining access to someone's thoughts and leveraging them for public humiliation or blackmail.

Equity of technology, past and present

Alongside privacy, it's imperative to consider the accessibility gap that will emerge as this technology comes to market. It will no doubt be expensive. Wealthy parents already pay to cognitively enhance their children through tutoring, test prep, and higher education, without any actual hardware to do so. As BCIs emerge, we must find ways to combat the pay-to-play environment that typically accompanies novel technology and address equity as the field advances.

In addition to accessibility, what kinds of biases will we run into with the AI that decodes the neural signals? AI is not an unbiased system; it is subject to algorithmic bias, a human-made bias that occurs when machines and models receive biased data and make biased, unfair decisions as a result. Implicit biases in decoding neural signals could have devastating impacts. What if the AI is trained on a homogeneous population and does not account for natural variation between brains? What would it mean to have a brain that a brain-computer interface simply does not work on? Instances of algorithmic bias are plentiful: soap dispensers that recognize lighter skin but fail on darker skin [3], the infamous Google Photos blunder in which a Black couple was labeled by the app as ‘gorillas’ [4], and the false arrest and imprisonment of a man due to a bad facial recognition match [5].
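To see that failure mode concretely, the toy experiment below uses entirely simulated data and hypothetical "populations" (not drawn from any real study): a decoder calibrated only on one group's signal statistics degrades sharply on users whose neural data is distributed differently.

```python
import numpy as np

# Toy demonstration of algorithmic bias from unrepresentative training data.
# Two simulated "populations" whose neural tuning differs slightly; the
# decoder is calibrated on group A only, then evaluated on both.

rng = np.random.default_rng(1)
n_neurons, n = 64, 2000

def simulate(weights):
    """Generate (firing rates, intended movement) pairs for one group."""
    intent = rng.normal(size=(n, 2))
    rates = intent @ weights.T + rng.normal(scale=0.5, size=(n, n_neurons))
    return rates, intent

w_a = rng.normal(size=(n_neurons, 2))
w_b = w_a + rng.normal(scale=0.8, size=w_a.shape)   # group B's tuning differs

rates_a, intent_a = simulate(w_a)
rates_b, intent_b = simulate(w_b)

# Train on group A only -- the "homogeneous population" scenario.
decoder, *_ = np.linalg.lstsq(rates_a, intent_a, rcond=None)

err_a = np.mean((rates_a @ decoder - intent_a) ** 2)
err_b = np.mean((rates_b @ decoder - intent_b) ** 2)
print(f"decoding error, group A: {err_a:.3f}")   # low on the training group
print(f"decoding error, group B: {err_b:.3f}")   # much higher when unrepresented
```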

As neuroscience and neurotechnology rapidly advance, we must build fairness, responsibility, and ethics into both our artificial intelligence algorithms and our data acquisition processes. We can learn from the consequences of inequitable technology like the examples above and find ways to ensure that our data is representative of a broad range of experiences, ideas, backgrounds, and identities.
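As one small, hedged illustration of what that could look like in practice (the group labels and the 10% threshold below are hypothetical choices, not an established standard), a training set can be audited for representation before any decoder is fit to it:

```python
from collections import Counter

# Hypothetical sketch: flag groups that are underrepresented in a BCI
# training set before fitting a decoder. Labels and threshold are assumed.

def underrepresented(group_labels, min_share=0.10):
    """Return each group whose share of the data falls below min_share."""
    counts = Counter(group_labels)
    total = sum(counts.values())
    return {group: count / total
            for group, count in counts.items()
            if count / total < min_share}

# Example: calibration sessions collected overwhelmingly from one group.
sessions = ["group_a"] * 180 + ["group_b"] * 15 + ["group_c"] * 5
print(underrepresented(sessions))   # {'group_b': 0.075, 'group_c': 0.025}
```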

The intersection between humanity and technology

Finally, what does it mean for humanity to exist in a world with such technology? University of Washington neuroethicist Dr. Tim Brown asks important questions in his research with patients who have prosthetics, questions that reveal how we engage with technology and probe the murky boundary between humans and machines. For example, Dr. Brown asks whether patients' mood, personality, thoughts, or behaviors have changed since receiving the device; whether patients felt there were times their actions were not their own; and whether they feel a stigma associated with having a device [6]. Oftentimes, patients answer yes to all, raising a further question: what makes us, us?

Furthermore, it is important to look at who is funding the technology and how this will impact our future. For instance, DARPA is one of the biggest funders of BCI research in the world. Making it easy to fly a military drone via thoughts alone might not be a welcome development in areas of the world that have experienced US military drones flown by hand. BCIs are supposed to improve human life, advance neuroscience, and create novel neurotechnology, so why should that funding be distributed through the military? Another big funder of this research is Facebook, which has hired more than 100 neuroscientists and engineers for its BCI program, yet has communicated little about it to the public. What is being done now in the BCI field affects us all; should companies that create technology with such societal impact also be held accountable to society?

Today many ethicists are thinking about these problems. A multi-council working group has been formed under the National Institutes of Health's BRAIN Initiative, a technology-focused research effort started by the Obama administration, to address them. The Royal Society, the independent scientific academy of the UK, released a 106-page report [2] in 2019 on brain-computer interfaces, part of which addresses ethical issues. Influential scientific journals like Nature have chimed in as well. Creating effective legislation will require support not just from Congress but from all of society.

Readers with a continued interest in neuroethics should read the weekly blog at theneuroethicsblog.com, hosted by Emory Neuroethics [7].

References

[1] “The Brief History of Brain Computer Interfaces.” Brain Vision UK, 30 Apr. 2014, http://www.brainvision.co.uk/blog/2014/04/the-brief-history-of-brain-computer-interfaces/.

[2] “The Brain-Computer Interface Is Coming, and We Are so Not Ready for It.” Bulletin of the Atomic Scientists, 15 Sept. 2020, https://thebulletin.org/2020/09/the-brain-computer-interface-is-coming-and-we-are-so-not-ready-for-it/.

[3] “Why Can’t This Soap Dispenser Identify Dark Skin?” Gizmodo, https://gizmodo.com/why-cant-this-soap-dispenser-identify-dark-skin-179793177.

[4] “Google Apologises for Photos App’s Racist Blunder.” BBC News, 1 July 2015, https://www.bbc.com/news/technology-33347866.

[5] Hill, Kashmir. “Another Arrest, and Jail Time, Due to a Bad Facial Recognition Match.” The New York Times, 29 Dec. 2020, https://www.nytimes.com/2020/12/29/technology/facial-recognition-misidentify-jail.html.

[6] “Between Humanity and Technology | Be Boundless.” University of Washington Boundless Campaign, https://www.washington.edu/boundless/neuroethics/. Accessed 3 Nov. 2021.

[7] The Neuroethics Blog. http://www.theneuroethicsblog.com/. Accessed 3 Nov. 2021.