Neuralink. Too futuristic?

7 October 2020

Advancements in technology have spurred innovation across a variety of fields. Entrepreneurs aim to invent new and better devices, software and solutions. And then there are some who take it even further.

An example of this is Elon Musk, a businessman, inventor and engineer known to many as the founder of Tesla and the aerospace company SpaceX, notorious for his futuristic ideas and his ability to (kind of) achieve them. Musk’s latest venture, Neuralink, is attempting to break into the field of neuroscience. The team, composed of neuroscientists, engineers and veterinarians, aims to develop a device that can directly interface with our brains.

The human brain is composed of roughly 86 billion neurons. Each of these neurons communicates with the others by sending electrical signals. Different parts of our brains are responsible for different types of activities; they carry information about the things we see, feel, touch or think. Neuralink aims to interact with neurons by connecting electrodes to the brain that can detect the transmitted signals and, if needed, stimulate the neurons with additional electrical current. The device is installed by cutting out a piece of the skull and replacing it with a coin-shaped implant. This implant then connects tiny wires to the surface of the brain, which can send electrical pulses to make neurons fire, or read the signals they emit. To make it even more sci-fi, the company has also developed a robot surgeon that is able to perform the procedure and insert the device within a few hours.
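
To make the “detect the transmitted signals” part a bit more concrete, here is a toy sketch in Python of the general idea of turning a raw electrode voltage trace into discrete neural events by simple thresholding. This is only an illustration of the concept with simulated data, not Neuralink’s actual signal-processing pipeline.

```python
import numpy as np

# Toy illustration only: simulate a noisy voltage trace from one electrode
# and detect "spikes" by simple amplitude thresholding. Real spike detection
# is far more involved; this just shows the general idea of turning raw
# voltage into discrete neural events.
rng = np.random.default_rng(0)
fs = 20_000                           # assumed sampling rate in Hz
t = np.arange(0, 1.0, 1 / fs)         # one second of data
trace = rng.normal(0, 5e-6, t.size)   # background noise, ~5 microvolt std

# Inject a few artificial spikes (~60 microvolt negative deflections)
spike_samples = rng.choice(t.size, size=8, replace=False)
trace[spike_samples] -= 60e-6

# Threshold at 5x the estimated noise level (a common rule of thumb)
noise_std = np.median(np.abs(trace)) / 0.6745
threshold = -5 * noise_std
detected = np.where(trace < threshold)[0]

print(f"Detected {detected.size} spike samples at {detected / fs} s")
```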

According to the company, the possibilities are endless. However, the first milestone is to allow people to control a robotic prosthetic limb or transmit brain data to a digital device. Treating Parkinson’s disease and providing relief from mental illnesses are also on the list. Recently, the device was shown in a live presentation, with several pigs that had the device implanted and their brain activity displayed on a screen.

The brain remains one of the areas of medicine that has not yet been fully explored and understood. Even if Neuralink fails, I believe such efforts should be encouraged, as they allow us to learn more about how the human brain works and gain insight into our consciousness. It is likely that further advancements in technology will allow similar devices to emerge in the future.

Let us hope that they do not fall into the hands of advertisers.

Sources/Further information
Presentation: Neuralink Progress Update, Summer 2020
Site: Neuralink.com

The problem of interpretability in machine learning models

29 September 2020

In recent years, machine learning technologies have been growing in popularity across a variety of fields. Vast amounts of data allow data scientists to explore and try to answer questions that would be impossible to tackle accurately without such technologies. Some of these questions are highly important and sensitive – for example, deciding whether a loan applicant will default, or whether a candidate should be considered for a job. There is a variety of models that can predict such outcomes; however, are their predictions generalizable and free of biases?

A classic example of this is a neural network classification model that was built to identify whether a given picture shows a husky or a wolf. Surprisingly, the model performed with high accuracy and predicted the correct animals. However, researchers quickly realized that something was wrong with it. The model was basing its decisions purely on one aspect – whether the picture contained snow. That is a somewhat logical shortcut, as pictures of wolves do usually contain snow, but given this reasoning, I doubt anyone would want to apply such a biased model in a real-life situation.

In making predictions, there is often a trade-off between interpretability and model accuracy. When a linear model predicts an outcome, the prediction is just the input variables multiplied by their weights – these can be easily explained. However, when using a more advanced model, such as a gradient boosting classifier or a neural network, this interpretation becomes complicated.
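
To illustrate what “input variables multiplied by weights” looks like in code, here is a minimal sketch with a made-up loan dataset and hypothetical feature names; for a linear model the fitted weights themselves already serve as the explanation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Made-up loan data for illustration: [income, debt ratio, years employed]
X = np.array([[50, 0.2, 5], [20, 0.8, 1], [70, 0.1, 10], [30, 0.6, 2],
              [60, 0.3, 7], [25, 0.7, 1], [80, 0.2, 12], [35, 0.5, 3]])
y = np.array([0, 1, 0, 1, 0, 1, 0, 1])  # 1 = the applicant defaulted

model = LogisticRegression(max_iter=1000).fit(X, y)

# For a linear model the explanation is already in the fitted weights:
# each prediction is a weighted sum of the inputs (plus an intercept),
# passed through a sigmoid.
for name, weight in zip(["income", "debt_ratio", "years_employed"],
                        model.coef_[0]):
    print(f"{name}: {weight:+.3f}")
```
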
In recent years, however, there has been some effort to tackle this; one approach is the so-called Local Interpretable Model-agnostic Explanations (LIME). Without going too deep into the theoretical details (I will provide some links for further information), LIME is essentially a technique that helps explain the reasoning behind any model’s particular prediction locally – meaning that it explains the model’s behaviour only around a single prediction, rather than the model as a whole.
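
A rough sketch of the idea behind “locally”, following the usual description of LIME (perturb the instance, query the black box, and fit a small weighted linear surrogate around that one point), might look like this; the black-box model and data below are placeholders invented for illustration:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(42)

# Placeholder black-box model trained on made-up data
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0).astype(int)
black_box = GradientBoostingClassifier().fit(X, y)

# Instance whose prediction we want to explain
x0 = X[0]

# 1. Perturb the instance by sampling points around it
perturbed = x0 + rng.normal(scale=0.5, size=(1000, 4))

# 2. Query the black box for its predicted probabilities
probs = black_box.predict_proba(perturbed)[:, 1]

# 3. Weight samples by proximity to x0 (closer points matter more)
dist = np.linalg.norm(perturbed - x0, axis=1)
weights = np.exp(-(dist ** 2) / 0.5)

# 4. Fit a simple weighted linear surrogate model locally
surrogate = Ridge(alpha=1.0).fit(perturbed, probs, sample_weight=weights)

# The surrogate's coefficients are the local explanation for x0
print("Local feature weights:", surrogate.coef_.round(3))
```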

In practice, LIME allows us to understand predictions by visualizing the features of a given observation that had the most impact on that particular prediction. Returning to our wolf/husky identifier: in a given picture, LIME would highlight the snow (or its absence), hence explaining the behaviour of that particular model.
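
For image models, the open-source lime Python package exposes this workflow directly. Below is a minimal sketch; the random image and the toy classifier (which simply calls pictures with bright lower halves “wolf”) are stand-ins I made up so the snippet is self-contained, in place of a real husky-vs-wolf network.

```python
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

# Dummy stand-ins so the sketch runs end to end: a random "photo" and a
# fake classifier that calls images with bright lower halves "wolf".
img = np.random.rand(64, 64, 3)

def classifier_fn(images):
    score = images[:, 32:, :, :].mean(axis=(1, 2, 3))  # "snow" brightness
    return np.column_stack([1 - score, score])         # [husky, wolf]

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    img,               # the picture we want explained
    classifier_fn,     # black-box prediction function
    top_labels=1,      # explain only the top predicted class
    num_samples=200,   # number of perturbed images to evaluate
)

# Keep the superpixels that pushed the prediction the most; for the real
# wolf/husky model this is where the snow regions would light up.
image, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5
)
highlighted = mark_boundaries(image, mask)  # superpixel boundaries overlaid
```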

The importance of this is that it allows data scientists to better evaluate their models and their applicability. When a model predicts that a loan applicant is ‘likely to default’ and the bank decides to deny the loan, managers will be able to explain to the person the exact reasons. In critical situations, we cannot simply use black boxes that give us answers to our simplified questions – we also need to know the reasoning.
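
For tabular data such as loan applications, the same lime package provides a LimeTabularExplainer. A minimal sketch follows; the dataset, feature names and random forest model are made up for illustration and would be replaced by a bank’s real data and model.

```python
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
feature_names = ["income", "debt_ratio", "years_employed"]

# Made-up historical applications: high debt ratio tends to mean default.
X_train = np.column_stack([rng.uniform(20, 80, 500),
                           rng.uniform(0.0, 1.0, 500),
                           rng.integers(0, 20, 500)])
y_train = (X_train[:, 1] > 0.5).astype(int)        # 1 = defaulted
model = RandomForestClassifier().fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["repaid", "default"],
    mode="classification",
)

# Explain the decision for one (made-up) applicant.
applicant = np.array([30.0, 0.7, 2.0])
explanation = explainer.explain_instance(
    applicant, model.predict_proba, num_features=3
)

# Human-readable reasons for this one decision, e.g.
# "debt_ratio > 0.50: +0.41" meaning it pushed towards "default".
for rule, weight in explanation.as_list():
    print(f"{rule}: {weight:+.2f}")
```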

Sources/Further information:
Video: Interpretable Machine Learning Using LIME Framework – Kasia Kulma (PhD), Data Scientist, Aviva
Article: Guide to Interpretable Machine Learning
