My CLI frustrations and how ChatGPT solved them.

4 October 2024


About a year ago, I started delving into the world of self-hosting services: things such as game servers, cloud storage and Netflix alternatives. The idea was to become less dependent on SaaS providers, and since I had a spare laptop lying around anyway, why not give it a go? So the first thing I did was install Proxmox, a hypervisor that would let me separate out the different services I was planning to set up.

This is where my struggles started. As you might be aware, most servers run on Linux without a GUI, and I soon discovered that Proxmox also primarily uses a command-line interface (CLI). For those not aware, a CLI is where you type text commands to make your computer do things; for example, “cd /home/user” takes you to that folder.

While I got a grasp of the basics relatively quickly, the complexity increased just as fast for the things I wanted to achieve. This is where ChatGPT came to save the day: with GPT-4o it could actively search the internet and scan through documentation to create the specific command I required. Instead of needing to write in computer language, I could explain to ChatGPT what I was trying to do, and it would generate the exact commands I needed.
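A typical exchange looked something like this (reconstructed from memory, so treat the wording as illustrative; pct and qm are the standard Proxmox container and VM tools):

```
# Me: "How do I see which containers and virtual machines exist on my Proxmox node?"
# ChatGPT suggested:
pct list   # lists the LXC containers on the node
qm list    # lists the QEMU virtual machines on the node
```

It was just as useful when something broke. Take output like this: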

```
myservice.service – My Custom Service
   Loaded: loaded (/etc/systemd/system/myservice.service; disabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Mon 2023-10-02 12:34:56 UTC; 5s ago
  Process: 1234 ExecStart=/usr/bin/myservice (code=exited, status=1/FAILURE)
```

Anyone familiar with these kinds of messages knows that they are nearly unreadable unless you already know the documentation; ChatGPT helped me decipher them too.
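In practice, it would point me to commands along these lines to dig up the actual error (a minimal sketch; “myservice” is a placeholder unit name):

```
systemctl status myservice.service         # the summary shown above
journalctl -u myservice.service -n 50      # the service's last 50 log lines
journalctl -xe                             # recent system log, with extra explanations
```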

While you still need to be relatively tech-savvy to set up your own services, I believe that as generative AI keeps developing, it will only get easier.

You may wonder what the advantages are of going through all this hassle instead of simply using Netflix, Google Drive, and OneDrive. As we all know, a couple of tech giants have monopolized many of the daily services we use. They collect our data in massive quantities, creating serious privacy concerns; furthermore, they suppress innovation within the field. Hosting your own services ensures that you minimize the amount of data you put on the internet.

Furthermore, many SMEs use several services for which they pay massive licensing and hosting fees each year. If these new tools help SMEs set up their own servers, they become less dependent on third-party pricing and can save costs.

All in all, I believe that the support LLMs provide in setting up your own services democratizes the internet and reduces the power of the tech monopolies; that is something anyone who supports free markets should celebrate.

Sources:

https://www.proxmox.com/en

https://pixabay.com/vectors/command-shell-terminal-dos-input-97893


Toxic Code: How Poisoning Attacks Are Undermining AI Systems

16 September 2024


In the rapidly evolving world of artificial intelligence (AI), not all advancements are aimed at making systems smarter. Some are designed to make them fail. Enter poisoning attacks, a form of sabotage that can turn intelligent systems against themselves. But how do these attacks work, and should we really care about them?

What Are Poisoning Attacks?

Imagine teaching a student a mix of good and false information. If you sprinkle enough false information into the lessons, even the brightest student will come to some incorrect conclusions. Poisoning attacks on AI work similarly: an attacker corrupts the data used to train the model, with the intent of making it err once it is deployed (Shafahi et al., 2018). For example, consider a self-driving car that is trained on images of road signs. If an attacker can poison the training set with even a small number of mislabeled images, say, stop signs labeled as something else, the car could misread traffic rules and become dangerous not only to the people in the car, but to everyone on the street (Wang et al., 2023).
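To make this concrete, here is a minimal sketch of the simplest flavour of poisoning, label flipping, on a made-up toy dataset (illustrative only; every name and number here is invented):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: 1,000 samples, label 0 = "stop sign", label 1 = "speed limit".
X = rng.normal(size=(1000, 16))
y = rng.integers(0, 2, size=1000)

# The attacker controls a small slice of the data pipeline and
# flips the labels of 3% of the "stop sign" examples.
poison_rate = 0.03
stop_idx = np.where(y == 0)[0]
flipped = rng.choice(stop_idx, size=int(poison_rate * len(stop_idx)), replace=False)
y[flipped] = 1  # these stop signs now teach the model the wrong class

print(f"poisoned {len(flipped)} of {len(stop_idx)} stop-sign samples")
```

A model trained on y after this point inherits the attacker's errors, even though the vast majority of the data is still perfectly clean.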

Figure: understanding types of AI attacks (Dash & Bosch AIShield, 2023)

Real-World Impact: Why Should You Care?

Poisoning attacks aren’t just a theoretical risk; they are a real threat to AI systems today. Take, for example, GitHub Copilot, an AI-driven code completion system that helps developers autocomplete their code in real time (GitHub, 2023). Researchers have shown that an attacker could poison such a model and steer it towards generating vulnerable code containing security defects (Improta, 2024). While this seems like a problem that only affects coders, it creates problems for everyone else too: vulnerable code can result in ordinary people losing their private data, as in the recent Social Security Number breach in the USA (Chin, 2024). Another example of how poisoning attacks can affect your everyday life is social media. Recommendation algorithms could be manipulated to determine what goes viral, or to spread misinformation by pushing fake news to a large number of users. This is a scary thought, as more and more of our news is filtered by AI.

Defending Against Poisoning: A Losing Battle?

Defenses against poisoning attacks are evolving every day, although attackers often seem to be one step ahead. Anomaly detection systems are being integrated into AI pipelines, but the question is: how much of the data needs to be poisoned before it is no longer considered an anomaly (Huang et al., 2022)? As Kurakin et al. (2016) highlight in “Adversarial Machine Learning at Scale”, vulnerabilities are being exploited by attackers in real time, creating a race between “poison” and “antidote”. However, the poison is being treated: with continuous advances in AI security and collaboration among researchers, defenses are growing smarter and aim to outpace attackers, making the future look promising for AI-based systems.
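As a rough sketch of the anomaly-detection idea, one can screen training data with an isolation forest before fitting a model. This is only the core intuition on synthetic data, not the federated-learning defense of Huang et al. (2022):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Clean samples cluster near the origin; poisoned ones sit far away.
clean = rng.normal(0, 1, size=(950, 8))
poison = rng.normal(6, 1, size=(50, 8))
X = np.vstack([clean, poison])

# Flag the most isolated ~5% of points and drop them before training.
detector = IsolationForest(contamination=0.05, random_state=0)
keep = detector.fit_predict(X) == 1  # +1 = inlier, -1 = suspected poison
print(f"kept {int(keep.sum())} of {len(X)} samples")
X_screened = X[keep]
```

Note the catch described above: if the attacker keeps the poison close enough to the clean distribution, the detector no longer sees an anomaly.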

Conclusion: Can We Trust AI?

AI holds a great deal of potential, but it is only as good as the data we feed it. The reality is that this is just the beginning of the fight to secure data and, by extension, AI itself. The future of technology is being shaped by these poisoning attacks, so stay tuned and keep your eyes out for misinformation. And don’t forget: data is the driving force behind everything!

References

Kurakin, A., Goodfellow, I. J., & Bengio, S. (2016, November 4). Adversarial machine learning at scale. arXiv (Cornell University). https://doi.org/10.48550/arxiv.1611.01236

ChatGPT. (2024, September 16). A Hacker Injecting Poison into an AI Brain Using a Syringe, in a Panoramic Style

Chin, K. (2024, February 20). Biggest Data Breaches in US History. UpGuard. https://www.upguard.com/blog/biggest-data-breaches-us

Dash, M., & Bosch AIShield. (2023, May 9). Understanding Types of AI Attacks. AI Infrastructure Alliance. https://ai-infrastructure.org/understanding-types-of-ai-attacks/

GitHub. (2023). GitHub Copilot · Your AI pair programmer. GitHub. https://github.com/features/copilot

Huang, S., Bai, Y., Wang, Z., & Liu, P. (2022, March 1). Defending against Poisoning Attack in Federated Learning Using Isolated Forest. IEEE Xplore. https://doi.org/10.1109/ICCCR54399.2022.9790094

Improta, C. (2024). Poisoning Programs by Un-Repairing Code: Security Concerns of AI-generated Code. https://arxiv.org/pdf/2403.06675

Shafahi, A., Huang, W., Najibi, M., Suciu, O., Studer, C., Dumitras, T., & Goldstein, T. (2018). Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks. https://arxiv.org/pdf/1804.00792

Wang, S., Li, Q., Cui, Z., Hou, J., & Huang, C. (2023). Bandit-based data poisoning attack against federated learning for autonomous driving models. Expert Systems with Applications, 227, 120295. https://doi.org/10.1016/j.eswa.2023.120295


Adversarial attacks on AI models: a big self-destruct button?

21 October 2023


“Artificial Intelligence (AI) has made significant strides in transforming industries, from healthcare to finance, but a lurking threat called adversarial attacks could potentially disrupt this progress. Adversarial attacks are carefully crafted inputs that can trick AI systems into making incorrect predictions or classifications. Here’s why they pose a formidable challenge to the AI industry.”

ChatGPT went on to sum up various reasons why these so-called ‘adversarial attacks’ threaten AI models. Interestingly, I had only asked ChatGPT to explain the disruptive effects of adversarial machine learning. I followed up with the question: how could I use adversarial machine learning to compromise the training data of an AI? Evidently, the answer I got was: “I can’t help you with that”. This conversation with ChatGPT made me speculate about possible ways to destroy AI models. Let us explore this field and see if it could provide a movie-worthy big red self-destruct button.

The Gibbon: a textbook example

When you feed GoogLeNet, one of the best-known image classification systems, a picture that is clearly a panda, it will tell you with great confidence that it is a gibbon. This is because the image secretly carries a layer of ‘noise’, invisible to humans, but a great hindrance to deep learning models.

This is a textbook example of adversarial machine learning: the noise works like a blurring mask, keeping the AI from recognising what is truly underneath. But how does this ‘noise’ work, and can we use it to completely compromise the training data of deep learning models?

Deep neural networks and the loss function

To understand the effect of ‘noise’, let me first explain briefly how deep learning models work. Deep neural networks in deep learning models use a loss function to quantify the error between predicted and actual outputs. During training, the network aims to minimize this loss. Input data is passed through layers of interconnected neurons, which apply weights and biases to produce predictions. These predictions are compared to the true values, and the loss function calculates the error. Through a process called backpropagation, the network adjusts its weights and biases to reduce this error. This iterative process of forward and backward propagation, driven by the loss function, enables deep neural networks to learn and make accurate predictions in various tasks (Samek et al., 2021).
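In code, that “minimize the loss” loop boils down to a few lines. Here is a minimal sketch in PyTorch with a stand-in model and random data:

```python
import torch
import torch.nn.functional as F

# A tiny stand-in model and batch: 4 input features, 2 classes.
model = torch.nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(8, 4), torch.randint(0, 2, (8,))

loss = F.cross_entropy(model(x), y)  # quantify the prediction error
optimizer.zero_grad()
loss.backward()    # backpropagation: gradients of the loss w.r.t. the weights
optimizer.step()   # nudge the weights to *reduce* the loss
```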

So while training a model means minimizing the loss function by updating the model parameters, adversarial machine learning does the exact opposite: it maximizes the loss function by updating the inputs. The updates to these input values form the layer of noise applied to the image, and with exactly the right values they can lead any model to believe anything (Huang et al., 2011). But can this practice be used to compromise entire models? Or is it just a ‘party trick’?
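The panda-to-gibbon noise is classically generated with the fast gradient sign method (FGSM). A compact sketch, assuming any PyTorch classifier, mirrors the training step above but pushes the input up the loss surface instead of the weights down it:

```python
import torch
import torch.nn.functional as F

def fgsm_noise(model, image, true_label, epsilon=0.007):
    """One step *up* the loss surface: the mirror image of a training step."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Nudge every pixel a tiny amount in the direction that increases the loss.
    return (image + epsilon * image.grad.sign()).detach()
```

The result looks identical to a human, because epsilon keeps each pixel change tiny, yet the classifier's confident answer changes completely.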

Adversarial attacks

Now we get to the part ChatGPT told me about. Adversarial attacks are techniques used to manipulate machine learning models by adding imperceptible noise to large amounts of input data. Attackers exploit vulnerabilities in the model’s decision boundaries, causing misclassification, and by injecting carefully crafted noise in vast amounts, the training data of AI models can be modified. There are different types of adversarial attacks: if the attacker has access to the model’s internal structure, he can apply a so-called ‘white-box’ attack, in which case he could compromise the model completely (Huang et al., 2017). This would pose a serious threat to AI models used in, for example, self-driving cars; luckily, access to the internal structure is very hard to gain.

So say computers were to take over from humans in the future, as the science fiction movies predict: could we use attacks like these to bring those evil AI computers down? Well, in theory we could, though practically speaking there is little evidence, as there have not yet been major adversarial attacks. What is certain is that adversarial machine learning holds great potential for controlling deep learning models. The question is: will that potential be exploited in a good way, kept as a method of control over AI models, or will it be used as a means of cyber-attack, justifying ChatGPT’s wary tone when explaining it?

References

Huang, L., Joseph, A. D., Nelson, B., Rubinstein, B. I., & Tygar, J. D. (2011, October). Adversarial machine learning. In Proceedings of the 4th ACM workshop on Security and artificial intelligence (pp. 43-58).

Huang, S., Papernot, N., Goodfellow, I., Duan, Y., & Abbeel, P. (2017). Adversarial attacks on neural network policies. arXiv preprint arXiv:1702.02284.

Samek, W., Montavon, G., Lapuschkin, S., Anders, C. J., & Müller, K. R. (2021). Explaining deep neural networks and beyond: A review of methods and applications. Proceedings of the IEEE, 109(3), 247-278.


Using Midjourney to recreate lost memories

18 October 2023


Midjourney is a generative AI program which can convert simple natural language prompts into high-quality images. If you have an idea which you can pen (or rather type) down for the program, it will visualize it for you.

Right around the time the hype for this newly launched AI was building up, I was finishing my exchange semester in Madrid, and like any other exchange student I made some stupid mistakes. My first mistake was dropping my phone from a fourth-floor balcony on New Year’s Eve. My second mistake was not making sure all my phone pictures were backed up to the cloud before going to the repair store the next morning, still half dizzy. It was merely coincidental that during the two days my phone was kept at the store, I was bombarded with AI-generated pictures in photography communities online. Upon further research, I found out that these were being created by inputting prompts into Midjourney. All you needed was a Discord account.

Thus, when I received my freshly formatted phone back, only to realize that all my pictures from the past six months of exchange had vanished, I decided to give Midjourney a try. Crestfallen that I had lost so many memories, I wanted the recreated images to be as realistic as possible. The free version gives you 25 prompt tries, so I researched the science behind these text prompts to make the most of those tries. You enter “/imagine” into the text field and voila, you can describe your image.

Midjourney prompt text field

Using a bit of trial and error and building upon what I read on the Internet, here are some general ideas which helped me recreate the images of my choice:

  • The more detailed the description, the better your image results usually are.
  • Make use of commas; they act as soft breaks in your image description.
  • Adding weights to your words (such as “::0.5”) or specifying the aspect ratio (such as “--ar 16:9”) can enhance the results.

Example of a typical Midjourney prompt
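For illustration, a prompt in the spirit of what I used (a made-up example, not one of my actual prompts):

```
/imagine prompt: golden hour rooftop terrace in Madrid, group of exchange students laughing, candid 35mm film photo, warm tones --ar 16:9
```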

You can find the results of my journey with Midjourney below, and I believe they are quite impressive. The only aspect Midjourney struggled with back when I made these pictures was recreating realistic human features, and this has been continuously improved and works even better now. Whether AI-generated images pose a threat to professionals in the field is a matter of consumer demand, and I have no opinion on that, because the creative industry seems like an irrational vortex to me. However, I can definitely see photographers, film studios, and creatives making use of such programs for conceptualization, innovation and maximizing their creative potential.

What do you think?

AI recreation of my lost 2022 camera roll


AI-Powered Learning: My Adventure with TutorAI

16 October 2023




Weapons of mass destruction – why Uncle Sam wants you.

14 October 2023


The Second World War was the cradle of national and geopolitical information wars, with both sides firing rapid rounds of propaganda at each other. Because of the lack of connectivity (no internet), simple pamphlets had the power to plant theories in entire civilizations. In today’s digital age, where everything and everyone is connected, the influence of artificial intelligence on political propaganda cannot be underestimated. This raises concern because, unlike in the Second World War, the information wars being fought today extend into national politics in almost every first-world country.

Let us take a look at the world’s most popular political battlefield: the US elections. In 2016, a handful of tweets containing false claims led to a shooting in a pizza shop (NOS, 2016). These tweets had no research backing the information they transmitted, but fired at the right audience they had significant power. Individuals have immediate access to (mis)information, and this is a major opportunity for political powers wanting to gain support by polarising their battlefield.

Probably nothing I have said up to this point is new to you, so shouldn’t you just stop reading this blog and switch to social media to give your dopamine levels a boost? If you were to do that, misinformation would come your way six times faster than truthful information, and you would contribute to this lovely statistic (Langin, 2018). This is exactly the essence of the matter, as it is estimated that by 2026, 90% of online content will be AI-generated (Facing Reality?, 2022). Combine the presence of AI in social media with the power of fake news, bundle these into propaganda, and add a grim conflict like the ones taking place in Eastern Europe or the Middle East right now, and you have got yourself the modern-day weapon of mass destruction. Congratulations! But of course, you have no business in all this, so why bother to interfere? Well, there is a big chance that you will share misinformation yourself when transmitting information online (Statista, 2023). Whether you want it or not, Uncle Sam already has you, and you will be part of the problem.

Artificial intelligence is about to play a significant role in geopolitics, and in times of war its power is even greater. Luckily, the full potential of these powers has not been reached yet, but it is inevitable that this will happen soon. Therefore, it is essential that we open the discussion, not about preventing the use of artificial intelligence in creating conflict and polarising civilisations, but about using artificial intelligence to repair the damage it does: to counter the false information it is able to generate, to solve the conflicts it helps create, and to unite the groups of people it initially divides. What is the best way for us to be not part of the problem but part of the solution?

References

Europol Innovation Lab. (2022). Facing reality? Law enforcement and the challenge of deepfakes: An observatory report from the Europol Innovation Lab.

Fake news shared on social media U.S. | Statista. (2023, March 21). Statista. https://www.statista.com/statistics/657111/fake-news-sharing-online/

Langin, K. (2018). Fake news spreads faster than true news on Twitter—thanks to people, not bots. Science. https://doi.org/10.1126/science.aat5350

NOS. (2016, December 5). Nepnieuws leidt tot schietpartij in restaurant VS [Fake news leads to shooting in US restaurant]. NOS. https://nos.nl/artikel/2146586-nepnieuws-leidt-tot-schietpartij-in-restaurant-vs


Can AI help me get a job?

10 October 2023


I am searching for a new job: a job that I can combine with my studies and that pays enough to afford my shoebox-sized apartment. But to get there, one often needs to write long motivation letters to various organisations and wade through countless potential job postings. However, the new age offers many opportunities to write motivation letters automatically and adapt them to each and every company.

In this search, I tested four separate AI-powered websites: ChatGPT, Kickresume, LazyApply and Rezi.

Kickresume, LazyApply and Rezi all provide a free trial of the algorithm that formulates extensive motivation letters. What is more, they all offer an easy user experience. These three websites also give the user prompts, like “paste the job description” and “paste your CV”, which help weave together one’s abilities and the required skills. The given prompts can also be skipped or modified where deemed unnecessary. Therefore, a complementary document can readily be made if one has a CV.

Regarding more mainstream, general-purpose AI language models like ChatGPT, one needs to insert a significant number of self-created prompts to create a document of even remotely similar quality. It can be a helpful tool for people with more background knowledge of HR. For others, it can actually complicate the creative process even more, since Farrokhnia et al. (2023) find that AI language tools, if not used right, can significantly hinder one’s productivity and creativity.

Baert and Verhaest (2019) also emphasize that overqualification in the application process does not lower one’s chances of getting the job, and even increases the chances of employment for temporary jobs. Therefore, the additional effort cannot hurt.

Overall, all platforms provide similar-level content and are an excellent tool for creating a personalized motivation letter. Sadly, the lack of layout options persists, but it can easily be tackled by using other platforms.

Last but not least, AI language models are built upon similar documents; therefore, their originality can only reach so far. Hence, the generated letters can come out too generic when applied to highly sought-after positions. So, as helpful as these websites can be, they cannot replace well-thought-out and personal material.

Baert, S., & Verhaest, D. (2019). Unemployment or overeducation: which is a worse signal to employers? De Economist, 167(1), 1-21.

Farrokhnia, M., Banihashem, S. K., Noroozi, O., & Wals, A. (2023). A SWOT analysis of ChatGPT: Implications for educational practice and research. Innovations in Education and Teaching International, 1-15.


Could AI be contributing to the disappearance of language diversity?

18 September 2023


Almost in no time, AI-powered large language models (LLMs) such as ChatGPT, Bing AI Chat and Google Bard AI have gained popularity among the mainstream part of society. However, I have noticed increased social media attention, specifically among Latvian speakers, to the lack of applicability and the oftentimes even comedic outputs these language models create.

I tested this observation by typing ‘write a poem’ in multiple languages into the dialogue interface of ChatGPT: English, Russian, French, Arabic, Hindi, Dutch, Latvian, Estonian and Lithuanian. Interestingly, although ChatGPT can produce, to some extent, coherent text in all of these languages, the last three, i.e., the Baltic languages, stand out with incoherent meanings and even grammar and style inconsistencies. Bang et al. (2023) argue that these are low-resource languages, i.e., languages for which relatively little training data exists. Not surprisingly, Latvian is spoken by only about 1.5 million native speakers (Latvian Presidency, 2015), and the AI model has not received the data input necessary to produce grammatically or stylistically coherent sentences (see picture).

So, how can this be an issue?

In the Baltic countries, approximately 95% of individuals speak at least two languages (Latvian Presidency, 2015). The second language is most often Russian or English, i.e., a high-resource language.

This is a worry, as many native speakers might simply stick to high-resource languages while browsing or creating content. This, in turn, reinforces the poor usability of low-resource languages and exacerbates language polarization. Low-resource languages are therefore at risk unless new measures are implemented to improve LLM training, e.g., training AI with ‘small data’ as suggested by Ogueji, Zhu and Lin (2021), or feeding AI new data resources.

Of course, the future of a language is not as one-dimensional and depends on many factors, but in these times of language globalization, the mainstream AI tools have helped no further!

Bang, Y., Cahyawijaya, S., Lee, N., Dai, W., Su, D., Wilie, B., … & Fung, P. (2023). A multitask, multilingual, multimodal evaluation of chatgpt on reasoning, hallucination, and interactivity. arXiv preprint arXiv:2302.04023.

Ogueji, K., Zhu, Y., & Lin, J. (2021). Small data? No problem! Exploring the viability of pretrained multilingual language models for low-resourced languages. In Proceedings of the 1st Workshop on Multilingual Representation Learning.

Latvian Presidency. (2015). Language. Latvian Presidency of the Council of the European Union. https://eu2015.lv/latvia-en/discover-latvia/language 


‘Energy Shaming’ is the new ‘Fat Shaming’ – Crypto thé solution?

9 September 2021


The paradigm of global climate change has been continuously shifting for decades. Currently, it is argued that global warming is caused by human-induced emissions of greenhouse gases. If no further measures are taken, it is possible that the Earth will warm by 1.5 to 2 degrees Celsius. This could lead to many natural disasters, some even affecting regions in Western countries in ways that are at present hard to imagine. Think about extreme drought, floods and wildfires in the Netherlands: a very strange concept to grasp. One published study even suggested that more global pandemics will follow due to climate change, marking the COVID-19 pandemic as the first of many.

It is clear that the current school of thought about the future of climate change is as concerning as hippie-inspired water-preserving practices in restrooms such as “If it’s yellow, let it mellow. If it’s brown, flush it down”. I would never wish anyone to live in such a world, especially one where your daily concerns are natural disasters (or permanently yellow toilet water, for that matter). Fortunately, decades of climate change research have not been taken for granted by many world leaders, leading them to sign the Paris Agreement in 2016: an agreement stating the collective goal of reaching net-zero emissions by the second half of the 21st century.

Unfortunately, many companies do not fully comply with the Paris Agreement. Of course, most companies will admit the existence of climate change and the importance of taking action. However, it is the extent to which they will work on it that is questionable. At the end of the day, they want to continue making profits under conditions of certainty, which in this case means extending their current practices for as long as possible. Shell is a great example of a company reluctant to implement green policies as fast as possible; H&M is another ‘committed’ company with bold green statements.

To prevent cancel culture, that is, people banning a certain entity from their lives, companies like Shell and H&M market themselves as green. Just as some people do not like to be fat-shamed, companies do not want to be ‘energy-shamed’; otherwise they could lose customers and thus revenue. But unlike fat shaming, which is an extremely unhealthy practice, energy shaming could actually be a great starting point for a greener tomorrow. Here is a hint how: blockchain technology.

Energy Web (EW) is a non-profit organization that uses open-source blockchain technology to keep track of which companies use green energy for their daily operations. The idea is that individuals and companies receive incentives to join EW’s network of validators on the blockchain. That way, all the stakeholders involved can check whether the others are using clean energy or not. Because it runs on blockchain technology, it is practically impossible for companies to lie about their energy sources. Thus, if a stakeholder does not believe that Shell is using green energy, it can easily verify that on EW’s network. This is quite plausible given how many established companies are already part of this network (including Shell). The fact that EW is open source also makes it more reliable.

Then the question arises: what does a validator actually check to confirm the use of green energy? Simply put, being part of EW’s network means that you have to buy green energy from grid operators that are approved by EW and the European Union (or another regulatory body) as green energy suppliers. The idea is that companies receive a number of certificates per volume of energy (in kWh) that they order from these grid operators. These certificates are called Energy Web Tokens (EWTs) and can be validated on EW’s network. EWTs also state where the energy comes from and when it was generated. In other words, if H&M wants to check whether Shell is using clean energy, it can check whether Shell possesses the right amount of EWTs through EW’s network. If it turns out that Shell is lying about its energy sources, Shell can be energy-shamed, most likely by direct competitors like BP.
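To make the verification idea concrete, here is a rough sketch of how anyone could read a token balance on a public EVM-style chain with web3.py. The RPC endpoint and both addresses are placeholders, and real EW certificate checks go through EW’s own tooling rather than a bare balance call:

```python
from web3 import Web3

# Placeholder endpoint and addresses: illustrative only.
w3 = Web3(Web3.HTTPProvider("https://rpc.example.org"))

# Minimal ERC-20-style ABI: just the balanceOf function.
ERC20_ABI = [{
    "constant": True,
    "inputs": [{"name": "account", "type": "address"}],
    "name": "balanceOf",
    "outputs": [{"name": "", "type": "uint256"}],
    "stateMutability": "view",
    "type": "function",
}]

token = w3.eth.contract(
    address=Web3.to_checksum_address("0x0000000000000000000000000000000000000001"),
    abi=ERC20_ABI,
)
holder = Web3.to_checksum_address("0x0000000000000000000000000000000000000002")

# Anyone can read this balance; that public readability is the whole point.
balance = token.functions.balanceOf(holder).call()
print(f"certificate token balance: {balance}")
```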

Is energy shaming going to solve climate change entirely? Probably not. However, it is a great stepping stone for encouraging companies to use clean energy. Hopefully, if the world gets greener, I can retire without any stress about the environment, or any peculiar hippie practices.


References

https://www.energyweb.org/about/what-we-do/

https://www.medium.com/energy-web-insights/issuing-certificates-with-the-ew-origin-sdk-part-ii-e18fa907c57

https://www.shell.com/media/news-and-media-releases/2021/shell-confirms-decision-to-appeal-court-ruling-in-netherlands-climate-case.html

https://www.shell.com/energy-and-innovation/digitalisation/news-room/blockchain-building-trust-to-enable-the-energy-transition.html

https://www.pressroom.ifc.org/all/pages/PressDetail.aspx?ID=18195

https://www.green.blogs.nytimes.com/2009/05/22/hippies-hollywood-and-the-flush-factor/

https://www.propublica.org/article/climate-infectious-diseases

https://www.yaleclimateconnections.org/2021/08/1-5-or-2-degrees-celsius-of-additional-global-warming-does-it-make-a-difference


Hey Podcast Lover! Have You Heard Of Lex Fridman?

7 October 2020

As a BIM student, it is very likely that you are interested in topics like coding, deep learning, artificial intelligence, machine learning, human-robot interaction, or autonomous vehicles. If by any chance you also enjoy listening to podcasts, you might be in luck:

I highly suggest you to check out the Lex Fridman Podcast.


Lex Fridman is an AI research scientist at the Massachusetts Institute of Technology, better known as MIT. He works on developing deep learning approaches to human sensing, scene understanding, and human-AI interaction, and he is particularly interested in applying these technologies to the field of autonomous driving.


If you know the Joe Rogan Experience, you are likely already familiar with Lex. Having worked for both Google and Tesla, Lex Fridman understands the business application of digital technologies. He uses his podcast to share this knowledge with his audience and to explore his fascinations with a variety of interesting guests. This can be particularly interesting for us as Business Information Management students, as we form the future bridge between business ventures and technological innovation. The podcast covers topics similar to those we are taught in class, sometimes in more depth, with international research experts in those particular fields.

If you enjoy podcasts, here are some examples of Lex Fridman Podcast episodes that I highly recommend you give a listen as a BIM student:

  • Episode #31 with George Hotz: Comma.ai, OpenPilot, Autonomous Vehicles.
    Famous security hacker. First to hack the iPhone. First to hack the PlayStation 3. Started Comma.ai to create his own vehicle-automation machine learning application. He wants to offer a $1,000 self-driving application that drivers can run on their phone.


  • Episode #49 with Elon Musk: Neuralink, AI, Autopilot, and the Pale Blue Dot.
    Elon Musk. Tech entrepreneur and founder of companies like Tesla, SpaceX, PayPal, Neuralink, OpenAI, and The Boring Company.


  • Episode #114 with Russ Tedrake: Underactuated Robotics.
    Professor of Electrical Engineering and Computer Science, Aeronautics and Astronautics, and Mechanical Engineering at MIT.


  • Episode #120 with François Chollet: Measures of Intelligence.
    French software engineer and researcher in artificial intelligence who works for Google. Author of Keras (keras.io), a leading deep learning framework for Python, used by organisations such as CERN, Microsoft Research, NASA, Netflix, Yelp, Uber, and Google.

These are just a few of the episodes that I enjoyed myself.

The benefit of a podcast is that you can listen to it basically anywhere and stop listening at any time. If you are not yet familiar with podcasts or with the listening experience they offer, maybe the Lex Fridman Podcast could be your first step into this experience.

You can find the episodes of the Lex Fridman Podcast here: https://lexfridman.com/podcast/

Or check out Lex Fridman’s Youtube channel here: https://www.youtube.com/user/lexfridman

The above links have been used as sources for this post.
