0 to 60 mph in 1.9 seconds: The Tesla Roadster.

8 October 2020

The pinnacle of automotive electrification.
Tesla CEO Elon Musk announced the development of the new Tesla Roadster in November 2017, positioning it as the successor to Tesla's first production car: the 2008 Roadster.


The fully electric vehicle is expected to be released after the renewed Model S, currently Tesla's best-known model. Tesla promises 0-60 mph in 1.9 seconds and a top speed of over 250 mph (400 km/h). The Roadster would be capable of such incredible performance thanks to its staggering claimed 10,000 Nm of torque and its all-wheel-drive system. This would make the Tesla Roadster the fastest car in the world.
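To put that 0-60 claim in perspective, a quick back-of-the-envelope calculation shows the average acceleration such a launch demands (a minimal sketch in Python; the 1.9-second figure is Tesla's claim, the rest is basic physics):

```python
# Average acceleration implied by Tesla's claimed 0-60 mph in 1.9 s.
MPH_TO_MS = 0.44704        # one mph expressed in metres per second
G = 9.81                   # standard gravity in m/s^2

v_target = 60 * MPH_TO_MS  # ~26.8 m/s
t = 1.9                    # claimed 0-60 time in seconds

a = v_target / t
print(f"Average acceleration: {a:.1f} m/s^2, i.e. {a / G:.2f} g")
# -> about 14.1 m/s^2, roughly 1.4 g sustained for nearly two seconds
```

In other words, occupants would be pushed into their seats with a force well above their own body weight for the entire launch, which is firmly in racing-machine territory.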

The Roadster would break the acceleration and performance records of traditional supercars with combustion engines. With an expected range of 1,000 km, it would also far exceed the current range record for electric vehicles. That record is likewise held by Tesla: the Model Y, with a range of 508 km. This is the most interesting point to me. Although the Tesla Roadster might look like an electric toy for rich people, I think it will in reality achieve two things that are very important in our search for a sustainable future.

The two reasons:
1. Just like the Model S did, the Tesla Roadster will make electric vehicles more appealing. Before the introduction of the Model S, electric vehicles were mostly low-performance cars with boring designs. The segment was mainly aimed at early adopters: drivers with a strong interest in sustainability, willing to compromise on performance and design in return for a more eco-friendly driving footprint. After the Model S took the market by storm, the image of electric vehicles changed completely. No longer were electric cars associated with compromised performance and boring designs; instead, Tesla made electric vehicles a reasonable choice in the executive segment. The Tesla Roadster is capable of doing the same. By outperforming “classic” supercars, the Roadster will increase the appeal of electric driving worldwide.


2. The Tesla Roadster will push electric vehicle technology further with record-breaking acceleration, top speed and, most importantly, range. Currently, electric vehicles are known for their acceleration: the electric drivetrain gives the car full access to its potential power from the moment you hit the pedal. However, top speed and range are often limited by battery size. Batteries are heavy, so manufacturers have to strike a balance between the required performance (speed, acceleration, range) and the weight of the car; after all, the heavier the car, the more that weight eats into the desired performance. I think the Tesla Roadster will push other car manufacturers to further develop the electrification of cars. This will result in more widely available models with increased performance at a more consumer-friendly price.


Do you have some savings lying around, and has this blog article made you interested in the Tesla Roadster?
Prices for the European market are still to be announced. In the US, the base model is expected to cost $200,000, while the first 1,000 production cars (announced as the Founder Series) will be priced at $250,000. Future customers can pre-order the Roadster with a base reservation of €43,000, or a Founder Series reservation of €215,000 (for the Netherlands). For more information, check out Tesla's website: https://www.tesla.com/nl_NL/roadster?redirect=no


Hey Podcast Lover! Have You Heard Of Lex Fridman?

7 October 2020

As a BIM student, it is very likely that you are interested in topics like coding, deep learning, artificial intelligence, machine learning, human-robot interaction, or autonomous vehicles. If by any chance you also enjoy listening to podcasts, you might be in luck:

I highly suggest you check out the Lex Fridman Podcast.


Lex Fridman is an AI research scientist at the Massachusetts Institute of Technology, better known as MIT. He works on developing deep learning approaches to human sensing, scene understanding, and human-AI interaction, and he is particularly interested in applying these technologies in the field of autonomous driving.


If you know the Joe Rogan Experience, you are likely already familiar with Lex. Having worked for both Google and Tesla, Lex Fridman understands the business application of digital technologies. He uses his podcast to share this knowledge and to explore his fascinations with a variety of interesting guests. This can be particularly interesting for us as Business Information Management students, since we form the future bridge between business ventures and technological innovation. The podcast covers topics similar to those we are taught in class, often in more depth, with international research experts in those fields.

If you enjoy podcasts, here are some examples of Lex Fridman Podcast episodes that I highly recommend giving a listen as a BIM student:

  • Episode #31 with George Hotz: Comma.ai, OpenPilot, Autonomous Vehicles.
    Famous security hacker: the first to hack the iPhone and the first to hack the PlayStation 3. He started Comma.ai to build his own machine-learning-based vehicle automation system, aiming to offer a $1,000 driver-assistance application that drivers can run on their phone.

  • Episode #49 with Elon Musk: Neuralink, AI, Autopilot, and the Pale Blue Dot.
    Tech entrepreneur and founder of companies like Tesla, SpaceX, PayPal, Neuralink, OpenAI, and The Boring Company.

  • Episode #114 with Russ Tedrake: Underactuated Robotics.
    Professor of Electrical Engineering and Computer Science, Aeronautics and Astronautics, and Mechanical Engineering at MIT.

  • Episode #120 with François Chollet: Measures of Intelligence.
    French software engineer and AI researcher at Google. Author of Keras (keras.io), a leading deep learning framework for Python used by organisations such as CERN, Microsoft Research, NASA, Netflix, Yelp, Uber, and Google.
These were just several examples of episodes that I enjoyed myself.

The benefit of a podcast is that you can listen to it practically anywhere and stop at any time. If you are not yet familiar with podcasts or the listening experience they offer, the Lex Fridman Podcast could be your first step.

You can find the episodes of the Lex Fridman Podcast here: https://lexfridman.com/podcast/

Or check out Lex Fridman’s Youtube channel here: https://www.youtube.com/user/lexfridman

The links above also served as sources for this post.


BIM, Meet Gertrude!

6 October 2020

Gertrude enjoying a well-deserved drink during her performance.

In August 2020, famous tech entrepreneur Elon Musk revealed his latest technological project: a pig called Gertrude. At first sight, Gertrude looks like an ordinary pig. She seems healthy, curious, and eager to taste some delicious snacks. Looking at her, it is hard to imagine how she managed to get one of the world's most radical and well-known tech entrepreneurs so excited. Gertrude just seems normal.

This is exactly the point!

Elon Musk: “Gotcha”

Gertrude is no ordinary pig. She has been surgically implanted with a brain-monitoring chip, the Link V0.9, created by Neuralink, one of Elon Musk's latest start-ups.

Neuralink was founded in 2016 by Elon Musk and several neuroscientists. The short-term goal of the company is to create devices that treat serious brain diseases and bypass damaged nervous systems. Our brain is made up of 86 billion neurons: nerve cells that send and receive information through electrical signals. According to Neuralink, your brain is like electric wiring; rather than having neurons send these electrical signals, the signals could be sent and received by a wireless Neuralink chip.

To simplify: Link is a Fitbit in your skull with tiny wires

The presentation in August was intended to demonstrate that the current version of the Link chip works and has no visible side effects for its user. The user, in this case Gertrude, behaves and acts just as she would without it. The chip is designed to be implanted directly into the brain by a surgical robot; getting a Link would be a same-day surgery taking less than an hour. This clears the way for Neuralink's next stage: the first human implantation. Elon Musk has said the company is preparing for this step, which will take place after further safety testing and the required approvals.

The long-term goal of Neuralink is even more ambitious: human enhancement through merging the human brain with AI. The system could help people store memories, or download their mind into robotic bodies. An almost science-fictional idea, fuelled by Elon Musk's fear of artificial intelligence (AI). Already in 2014, Musk called AI “the biggest existential threat to humanity”. He fears that, at the current rate of development, AI will soon reach the singularity: the point where AI has reached intelligence levels substantially greater than the human brain's, and technological growth has become uncontrollable and irreversible, with unforeseeable effects on human civilization. Hollywood has given us examples of this in The Matrix and Terminator. With a strategy of “if you cannot beat them, join them”, Elon Musk sees the innovation done by Neuralink as an answer to this (hypothetical) catastrophic point in time. By allowing human brains to merge with AI, Elon Musk wants to vastly increase the capabilities of humankind and prevent human extinction.

Man versus Machine

So, will we all soon have Link-like chips in our brains while we await the AI apocalypse?

Probably not. Currently, the Link V0.9 only collects data from a small number of neurons in a coin-sized area of the cortex. For Gertrude, the pig we met earlier in this article, this means Neuralink can wirelessly monitor activity in a part of the brain linked to the nerves in her snout: when Gertrude's snout is touched, the system registers the spikes produced by neurons firing electrical signals. In contrast, major human functions typically involve millions of neurons from many parts of the brain. To become capable of helping patients with brain diseases or damaged nervous systems, the device will need to collect much larger quantities of data from multiple areas of the brain.
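Neuralink has not published its detection pipeline, but the textbook way to register such neural spikes is a simple threshold crossing on the recorded voltage trace. A minimal illustrative sketch (all numbers hypothetical, not Neuralink's actual code):

```python
import numpy as np

# Hypothetical single-channel voltage trace sampled at 20 kHz (values in microvolts).
rng = np.random.default_rng(0)
fs = 20_000
trace = rng.normal(0, 5, fs)           # one second of background noise
trace[[3_000, 9_500, 14_200]] -= 60    # three injected spikes; extracellular spikes
                                       # show up as sharp negative deflections

# Classic rule of thumb: set the threshold at 4-5 standard deviations of the signal.
threshold = -4.5 * trace.std()

# Register a spike whenever the trace crosses the threshold downwards.
crossings = np.flatnonzero((trace[:-1] > threshold) & (trace[1:] <= threshold))
print(f"Detected {crossings.size} spikes at samples {crossings.tolist()}")
```

Real systems refine this with filtering and spike sorting, but the principle is the same: a neuron firing near the electrode produces a brief voltage deflection that stands out from the noise.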

On top of that, brain research has not yet achieved a complete understanding of the human brain. There are many functions and connections that are not yet understood. It appears that the ambitions of both Elon Musk and Neuralink are ahead of current scientific understanding.

So, what next?

Neuralink has received a Breakthrough Device Designation from the US Food and Drug Administration (FDA), the organisation that regulates the quality of medical products. This gives Neuralink access to the FDA's experts during the premarket development phase and opens the path towards human testing. The first clinical trials will be done on a small group of patients with severe spinal cord injuries, to see if they can regain motor functions through thoughts alone. For now, a medical goal with potentially life-changing outcomes, while we wait for science to catch up with Elon Musk's ambitions.


Thank you for reading. Did this article spark your interest?
For more information, I recommend checking out Neuralink's website: https://neuralink.com/

Curious how Gertrude is doing?
Neuralink often posts updates on their Instagram page https://www.instagram.com/neura.link/?hl=en



Artificial Intelligence: How I Learned to Stop Worrying and Love Skynet

12 September 2017


What is the first thing we think of when we hear the phrase ‘Artificial Intelligence’ (AI)? Mechanical monsters bent on exterminating humanity, as the film industry teaches us with sci-fi movies such as Terminator, with its genocidal Skynet?

During the lecture on the eleventh of September, 2017, the professor asked who had ever made use of AI. Siri was held up as an example of an AI, and that is correct – but the many, many hands that weren't raised were probably incorrect; does anyone not use Google's search engine on a daily basis, for instance? Artificial Intelligence is everywhere, from the spam filters on our emails to the cars we drive daily.

AI, then, is not hardware, but software; a brain of software, using hardware as its medium, capable of connecting data through mathematical reasoning to reach new insights and conclusions.

So far, these are by and large background processes, largely invisible to the uncritical eye. This is called ‘weak AI’, or ‘Artificial Narrow Intelligence’ (ANI). Siri is an example of this, as are the various game (chess, Go) champions, the many Google products – its translator, its search engine, the spam filters on its email provider – the whole process of applications or websites recommending products, videos, friends, or songs, self-driving cars, and so on. As should be clear, this kind of AI is not weak in that it can barely achieve anything – one would hardly call self-driving cars simple products, child's play to create – but it is narrow, in that this AI can only excel at a very narrowly defined task. Hence the term Artificial Narrow Intelligence: ANI.

ANI is practically everywhere these days. But if there is a narrow AI, then surely there is a more general AI as well? Indeed: Artificial General Intelligence, AGI, also known as strong AI. A human-like AI, as smart as the average human on all tasks – not just one narrowly defined one – capable of performing any intellectual task a human can. This AI does not exist yet – unless, of course, you feel that now would be an excellent time to expose yourself as the Terminator, dear reader?

There are two problems that we still have to tackle in order to create AGI. One concerns computational power. Here, according to the TOP500, a Chinese supercomputer, the Sunway TaihuLight, currently takes the lead. It overtook the Chinese Tianhe-2 in June of 2016 (TOP500 compiles its list every June and every November), and as of June 2017, it still claims the number one spot. It can perform 93 quadrillion floating-point operations per second (93 petaflops), roughly thrice as much as the Tianhe-2 (33.86 petaflops). Is that more than the human brain? Various scientists estimate the human brain at anywhere from 10^11 to 10^13.5 FLOPS, while others rank it an order of magnitude higher, or dismiss the comparison outright. For further reading, https://www.quora.com/How-many-FLOPS-is-the-human-brain – including the comments – might be a nice place to start, but Google is full of many wildly differing claims.

It hardly matters, for now, though. Even if a supercomputer exists that is better than the human brain, it would only be better in the number of floating-point calculations it could perform – but at what cost? The human brain requires 20 watts – the energy of a light bulb – and 115 square centimetres, which more or less fits in your hand. Green500, which ranks TOP500's supercomputers by energy efficiency, gives the Sunway TaihuLight fourth place: it delivers about 6,051.30 MFLOPS (roughly 6 GFLOPS) per watt. Times 20, that makes roughly 121 GFLOPS on the same wattage the human brain runs on – quite a bit short of 93 petaflops. Further, whereas the human brain fits in the palm of a hand, the Sunway TaihuLight fits comfortably in a room of a thousand square metres. Not quite the hardware the Terminator ran around with.
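The arithmetic behind that comparison fits in a few lines (a sketch using the figures quoted above, not an official benchmark script):

```python
# Energy-efficiency comparison: Sunway TaihuLight versus the brain's power budget.
taihulight_flops = 93e15      # ~93 petaflops (TOP500, June 2017)
efficiency = 6_051.30e6       # Green500: ~6,051.30 MFLOPS per watt
brain_watts = 20              # rough power draw of the human brain

flops_on_brain_power = efficiency * brain_watts
print(f"TaihuLight on 20 W: {flops_on_brain_power:.2e} FLOPS (~121 GFLOPS)")
print(f"Shortfall versus the full machine: {taihulight_flops / flops_on_brain_power:,.0f}x")
# -> roughly 1.21e11 FLOPS on brain power, nearly 770,000 times
#    less than the full 93 petaflops
```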

The second problem with reaching AGI, aside from computational power, is intelligence. Roughly three methods are currently being attempted. We could simply copy our brain: literally, with a 3D printer, or slightly less literally, by setting up a network of neurons that would fire randomly and not achieve much at all in the beginning. But the moment it does achieve something, such as correctly guessing that a certain picture is a muffin and not a puppy, we can reinforce this path, making the network likelier to use it in the future and therefore likelier to be correct in the future. With this approach, the one-millimetre-long brain of a flatworm, consisting of 302 neurons, was emulated and put into a LEGO body (because we're all children at heart) in 2014. For comparison, the human brain consists of roughly 100 billion neurons – but even so, as our progress increases exponentially, some have eyeballed this method to achieve success somewhere around 2030 to 2050.
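That “reinforce the paths that happened to be right” idea is, in miniature, how artificial neural networks are trained today. Below is a toy sketch of the principle with a single artificial neuron and made-up data (this is the classic perceptron rule, not an actual brain emulation): whenever the guess is wrong, exactly the connections that fired on that input are nudged towards the correct answer.

```python
import random

random.seed(42)

# Toy task: label a 3-feature input ("muffin" = 1, "puppy" = 0).
# Made-up data in which only the first feature actually matters.
examples = [([1, 0, 1], 1), ([1, 1, 0], 1), ([0, 1, 1], 0), ([0, 0, 1], 0)]

weights = [random.uniform(-1, 1) for _ in range(3)]
bias = 0.0
lr = 0.1  # how strongly one outcome reinforces or weakens a path

for epoch in range(50):
    for features, label in examples:
        activation = sum(w * x for w, x in zip(weights, features)) + bias
        guess = 1 if activation > 0 else 0
        error = label - guess  # 0 when correct, +/-1 when wrong
        # Adjust exactly the connections that fired on this input.
        weights = [w + lr * error * x for w, x in zip(weights, features)]
        bias += lr * error

print("Learned weights:", [round(w, 2) for w in weights])
# After training, the weight on the first (informative) feature dominates.
```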

A second method is to copy evolution, instead of copying the brain. The large downside is that evolution had billions of years to play around with us. The upside is that we are not evolution: we are not driven by random chance, and we have the explicit goal of creating intelligence. Evolution might well select against intelligence, because a more intelligent brain requires more energy that might be better spent on other abilities, such as staying warm. Fortunately, unlike evolution, we can supply energy directly, which might be highly inefficient at first, but we can improve that over time.

The third method is to let our budding AI figure it out for us, by researching AI and changing its own code and architecture based on its findings. Unsupervised machine learning – but of course, we can shut it off before it becomes more capable than the human brain at everything and turns into a superintelligence, right?

Right?

You just know the answer is ‘no’ when someone poses the question.

Next to Artificial Narrow Intelligence (ANI) and Artificial General Intelligence (AGI), there is a third variant; Artificial Superintelligence (ASI). Most agree that a self-improving AI is the likeliest to go from ANI to AGI – and if it isn’t, there is still no reason to assume that no self-improving AI will ever come into existence. If a self-improving AI reaches general intelligence – that is, if it becomes just as capable as humans are…

Then it is inherently more capable already. It is self-improving, after all, constantly applying upgrades and fixes to improve itself even more. It has microprocessors, some of which today run at 3.6 GHz, whereas our own neurons fire at a measly 200 Hz. It communicates at the speed of light, 299,792,458 metres per second, whereas our brain's signals travel at 120 metres per second. Its physical size is scalable, allowing it to increase its memory (RAM and HDD; short-term and long-term memory), whereas our brain is confined by our skull – and we couldn't really expand our brain anyway, because our 120 metres per second would be too slow to keep up.
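Those raw speed gaps are easy to quantify (a quick sketch of the arithmetic, using the figures from the paragraph above):

```python
# Raw speed gaps between silicon and biology, using the figures above.
cpu_clock_hz = 3.6e9          # a fast modern microprocessor
neuron_rate_hz = 200          # approximate peak firing rate of a neuron
light_speed_ms = 299_792_458  # signal speed in an idealised computer, m/s
nerve_speed_ms = 120          # fast myelinated nerve fibres, m/s

print(f"Clock-rate gap:   {cpu_clock_hz / neuron_rate_hz:,.0f}x")   # 18,000,000x
print(f"Signal-speed gap: {light_speed_ms / nerve_speed_ms:,.0f}x") # ~2,500,000x
```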

There are many more factors to consider. Humans cannot work literally every second of their lives, as humans tire out and lose their focus. Humans are prone to making mistakes, doubly so when tired or unfocused; to err is human, after all. Humans cannot easily work in a variety of circumstances, such as radioactive places, deserts, or the arctic.

Then there is humanity's biggest advantage, the one that has enabled humanity to take the world by storm and reshape it in our image, with towering blocks of concrete and signs of human habitation everywhere. Humans developed language and lived in communities – tribes – allowing not individual humans, but all of humanity, to learn. Fathers would remember the lessons of their own fathers and teach them to their peers, their sons, and their sons' peers, and so the collective knowledge of humanity increased gradually. Writing, and alphabets, were developed to better store this knowledge – stored in libraries, written by hand, until the printing press was invented. It is no wonder that the printing press spawned a wave of revolutions and ideas, now that every single citizen could read books and form their own opinions. Universities taught all that humanity knew, and with every generation this collective knowledge grew exponentially. Now we have the internet and computers, allowing for unprecedented communication and storage, and if you compare the last 50 years to the 50 years before, and to the 50 years before that, and so on, it becomes quite apparent how rapidly our knowledge has grown recently (for more reading, search for Ray Kurzweil's ‘Law of Accelerating Returns’).

But we are humans, individuals still. We are not a worldwide network of interconnected computers. We have an advantage that allows us to utterly dominate the entirety of Earth – and AIs will have this advantage ten times over.

This raises the question: what is AGI supposed to be, anyway? It is a human-centric measurement, but humans are not special; the average IQ of a human is not a universal constant, just another point on a graph of the average intelligence of species. For an AI, it is just the next number to pass, after passing worms and monkeys and ravens and more. And it is a point exceedingly close to both a mentally challenged individual and Einstein, despite the vast difference we would perceive between the two.

There are three broad views on the transformation from AGI to ASI. The arguments for and against these views are more philosophical than what is presented in the paragraphs above, which is why I will include an article that expands upon each of the three views for further reading, should you desire to do so.

Proponents of the soft take-off view say that the concept of self-improvement already exists: in large companies. Intel has tens of thousands of people and millions of CPUs, all of which are used to become even better at developing CPUs; it is highly specialised in this task. So, too, with an AGI; this AI will participate in our economy (which knows exponential growth as well) because it can specialise in something to earn money with. Money which will be used to buy the necessary means for self-improvement. This specialisation – letting others develop things for you, which you then buy with the money earned from your own specialised skillset – is historically speaking a great advantage of humanity, and will be more advantageous for this AI as well than doing everything on its own. The AI would have to compete with almost all of humanity and all of humanity's resources to be more efficient in every single thing than any specialised human is – and an AGI, a human-level AI, is not so vastly more efficient than all of humanity together.

There is a scientific foundation for this, such as what is written in the book ‘Artificial General Intelligence 2008: Proceedings of the First AGI Conference’ (in short: approximately 1% of the citizens of the USA are scientists or engineers, approximately 1% of those devote themselves to cognitive science, we can estimate cognitive science to improve its models at a rate equal to Moore's Law, from this we can estimate humanity's ability at intelligence improvement, and so we can calculate that, for now, it makes sense for an AGI to work and interact with us to improve itself), but if you simply wish to read one of the many articles that deal with the subject: http://www.antipope.org/charlie/blog-static/2014/02/the-singularity-is-further-tha.html

A second view is that of the semi-hard take-off scenario, comparable to what is often seen in movies: an AI connects to the internet and rapidly reads and learns everything present there. Perhaps you have once heard of the ‘paperclip maximiser’ scenario ( https://nickbostrom.com/ethics/ai.html ): an AI is made with an explicit purpose (producing paperclips, in this case), and as it tries to do this as efficiently as possible, it transforms the entire world – and beyond – into a paperclip factory, including all humans. That makes sense; the AI's only goal was to produce paperclips, after all, and why would the AI perceive there to be any moral constraints? The semi-hard take-off scenario can perhaps best be seen as a hard take-off scenario with a temporary transition phase in front of it – but during this phase, it may well be unstoppable already: http://hplusmagazine.com/2014/09/26/superintelligence-semi-hard-takeoff-scenarios/

The final view is that of the hard take-off, wherein the AI becomes superintelligent in mere minutes.
One of the many articles for your perusal: http://lesswrong.com/lw/63t/requirements_for_ai_to_go_foom/

There is a colossal danger with both the semi-hard and the hard take-off scenarios, for in all likelihood we will only get one chance at creating a superintelligence. Whether there is a transition period or not, it will be exceedingly hard – practically impossible? – to abort it. But the paperclip maximiser, as silly as it may sound, indicates that it is crucially important for such a superintelligence to ‘be like us’ and to ‘like us’. And that is a task far harder than one might think (one that Elon Musk is attempting as well with OpenAI, an open-source attempt to create a safe superintelligence: https://openai.com/ ).

If a monkey, just slightly less intelligent than us, cannot even comprehend what we do, and lacks even the most basic of concepts that we all share, then how unfathomable will a superintelligence be? It won't just be ‘slightly more intelligent’, a gap comparable to that between us and elephants, ravens, whales, dolphins, or monkeys. It will be like a god, perhaps even without the ‘like a’. It could give us immortality, make Earth like paradise, and fulfil our every desire. That is not to say we are doomed to obsolescence, like ants writhing invisibly around the feet of humans carelessly stamping on them left and right – but the ways we might potentially keep up with such a superintelligence sound less than desirable at first glance: https://waitbutwhy.com/2017/04/neuralink.html

Of course, it all sounds so fantastical and absurd, it is probably all just nonsense… But what if it isn’t? Can we take that risk? If it isn’t nonsense, how crucial, then, is it that we develop a proper superintelligence? One that aligns with us, our morals and values, even as they evolve. One that knows what we want, even if we cannot articulate it; can you define what ‘happiness’ is, for yourself?

Perhaps it is navel-gazing nonsense. Or perhaps we are actually on the brink of creating a hopefully-benevolent god.

As always, opinions are divided. What is yours?

Main sources used:

  • Artificial Intelligence and Its Implications for Future Suffering
  • The AI Revolution: The Road to Superintelligence
  • http://www.aaai.org/home.html
