Do Easy-to-Use Local Image Generation AI Applications Have Commercial Potential?

10 October 2024


There are many AI applications for image generation, but most are cloud-based services that charge by subscription or per use. Stable Diffusion WebUI stands apart: an open-source, free image generation tool that has attracted widespread attention for its powerful capabilities. However, it also has a relatively high barrier to entry. Based on my experience with Stable Diffusion WebUI, I will briefly discuss the commercial prospects of low-threshold, easy-to-use local image generation AI applications.

Advantages of Stable Diffusion WebUI

The advantage of Stable Diffusion WebUI is not only that it is completely open source and free, but also that its model is deployed locally (similar to the on-device AI highlighted at the iPhone 16 launch event). Since it runs directly on local hardware, it does not need to upload any data to the cloud, and users can draw fully on their own device's computing resources without a network connection.
Compared with cloud applications such as Midjourney, data privacy is a major advantage of local applications. Neither the user's input nor the generated images are uploaded to a server; everything is processed locally, which suits users who are sensitive to data security and privacy.
And because it runs on local hardware, performance can be very high. It can flexibly tap the computing power of a high-end GPU, which makes it especially suitable for users with powerful hardware: unconstrained by network bandwidth, it can give full play to its image generation capabilities. All of this makes it an excellent tool.

The Threshold of Using Stable Diffusion WebUI

Although Stable Diffusion WebUI is powerful, its learning curve discourages many ordinary users. Here is my personal experience with installation, model import, and parameter tuning.
First of all, installation and setup are relatively complicated. You need to download the tool and deal with many environment dependencies, such as Python and other required libraries. These steps are daunting for new users; without the help of various forums, blogs, and GPT, I certainly could not have managed it.
In addition, selecting and importing models is a big challenge. Although there are plenty of free model resources online, finding one that suits your needs is not easy, and choosing among checkpoints and LoRA models takes a lot of time. Finally, you need to adjust many parameters yourself, including the number of steps, the sampling method, and the resolution, to achieve the desired effect. This complexity makes the tool hard for ordinary users to master.
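To give a feel for what the WebUI hides behind its interface, here is a minimal sketch of the same generation step using Hugging Face's diffusers library. The checkpoint name, scheduler choice, and parameter values below are illustrative assumptions, not the WebUI's own code.

```python
# A minimal sketch of what Stable Diffusion WebUI does under the hood,
# written with Hugging Face's `diffusers` library. Checkpoint and values
# are illustrative; any compatible local checkpoint works.
import torch
from diffusers import StableDiffusionPipeline, EulerAncestralDiscreteScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # assumed checkpoint; swap in your own
    torch_dtype=torch.float16,
).to("cuda")                            # runs entirely on local hardware

# The "sampling method" in the WebUI corresponds to the scheduler here.
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

image = pipe(
    "a watercolor painting of a lighthouse at dusk",
    num_inference_steps=30,   # "steps": more steps, slower but often finer detail
    guidance_scale=7.5,       # how strongly the prompt steers the output
    width=768, height=512,    # output resolution
).images[0]
image.save("lighthouse.png")
```

Every one of those keyword arguments is a knob the WebUI exposes as a form field, which is exactly the complexity a consumer app would want to hide.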

Civitai – A platform for sharing, discovering and downloading Stable Diffusion models and AI painting creation resources

Launch Simple Applications to Attract More Users

If we could launch a simpler, easier-to-use app based on Stable Diffusion WebUI, aimed at the consumer market and the general public, we could push the technology to a much wider market.
The key is to lower the barrier to use. By bundling the environment dependencies, such as pre-installing all necessary libraries and the runtime, users could skip the configuration process entirely. The application could also provide a one-click channel for downloading and purchasing models, helping users quickly obtain high-quality generation models.
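As a sketch of what such a one-click download could look like, the snippet below uses the real huggingface_hub client; the repository and file names are illustrative assumptions, and a real app would offer a curated catalogue instead.

```python
# Hypothetical "one-click model download" using the huggingface_hub client.
# Repo id and filename are examples; a real app would list curated models.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="runwayml/stable-diffusion-v1-5",
    filename="v1-5-pruned-emaonly.safetensors",
)
print("model stored locally at:", path)
```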
At the same time, user-friendly interface design is essential. Following established interaction design principles, the UI and UX should be optimized so that complex parameter adjustments collapse into a few key options, letting ordinary users easily generate the pictures they want without losing flexibility. For example, users could control the quality and style of generated pictures through simple sliders or preset modes, avoiding the underlying technical details.
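Here is one hypothetical way such presets could map onto the underlying parameters. The preset names and values are made up for illustration, and the sketch reuses the `pipe` object from the earlier diffusers example.

```python
# Hypothetical preset layer: one user-facing choice hides several knobs.
PRESETS = {
    "Fast draft":   {"num_inference_steps": 15, "guidance_scale": 6.0, "width": 512, "height": 512},
    "Balanced":     {"num_inference_steps": 30, "guidance_scale": 7.5, "width": 640, "height": 640},
    "High quality": {"num_inference_steps": 50, "guidance_scale": 8.0, "width": 768, "height": 768},
}

def generate(pipe, prompt: str, preset: str = "Balanced"):
    """Map a single user-facing preset onto the underlying SD parameters."""
    return pipe(prompt, **PRESETS[preset]).images[0]
```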

Commercial Potential

From a commercial perspective, local image generation applications based on Stable Diffusion WebUI have broad prospects.
Building on Stable Diffusion WebUI not only avoids consuming cloud resources but also makes pricing more flexible. Compared with the payment models of conventional online image generation AI such as Midjourney, a local Stable Diffusion application could adopt a variety of pricing strategies: a one-time purchase, paid advanced models, or a subscription granting unlimited generations for a period of time.
Overall, through a simplified user experience and flexible, low pricing, we may be able to build an accessible, mass-market image generation AI application on top of Stable Diffusion WebUI, attract a wide audience, and look forward to its large-scale adoption.


Law & Order & AI – How California's Bill SB 1047 will impact AI development in the USA

27 September 2024


The USA is often praised for its openness to innovation, while the EU is seen as lagging behind. But there is one aspect where the USA is now following the EU: AI regulation. In this blog post I will discuss the Californian bill "SB 1047: Safe and Secure Innovation for Frontier Artificial Intelligence Models Act", which currently awaits ratification by the Governor of California (California Legislative Information, 2024).

With the Artificial Intelligence Act (AI Act), the EU has created one of the most far-reaching efforts in the world to regulate AI, even though it is not yet fully in force. As we discussed in class, the AI Act focuses on aspects such as a risk-based framework, accountability and transparency, governance, and human rights (European Parliament, 2023).

How does SB 1047 compare? First off, it is important to note that the bill would only become law in California. Nonetheless, this amounts to more or less nationwide application, since most affected companies are based in Silicon Valley, California.

SB 1047 focuses on a few different aspects; I have highlighted the ones I think are most far-reaching:

  1. Developers must implement controls to prevent the model from causing “critical harm”
  2. Developers must provide a written and separate safety and security protocol
  3. Developers must include a “kill switch” through which a full shutdown can be enacted
  4. Developers must have their models tested, assessed, and regularly audited (Gibson Dunn, 2024).

Like the AI Act, SB 1047 would target high-risk, high-impact AI models, while focusing on the safety and security of the people affected by AI.

But why would you care? Will this even affect everyday people? Isn’t this just stifling innovation and risking loss of competitive advantage?
Before you jump to the comments, let me first highlight one of the bill's supporters: Elon Musk. On his platform X, Musk has posted about his support for the bill, stating that AI should be regulated like "any product/technology that is a potential risk to the public" (Tan, 2024). I don't often align with Musk's views, but I really agree with this stance on regulation!

Screenshot of Musk's post on X supporting the SB 1047 bill.

Why should we let AI and its development go completely unchecked, yet still use it for vital parts of our daily lives? Why should we not want to know how AI works under the hood? Time and again, history has taught us that leaving big systems unchecked because they were deemed "too complex", or because we trusted the people running them to act in the public's best interest, does not always lead to the desired outcomes.
From job applications to health, safety, and privacy, we already use AI in most aspects of life. I, for one, do not want these parts of my life to be guided by the ethics (or lack thereof) of individuals. I want clear legislation and a framework in place to guide the future development of AI. Even though most people might not clearly see how their lives are (beneficially) impacted by AI today, I don't want anyone to ever experience how AI might detrimentally impact their life.


Resources used:

California Legislative Information. (2024, September 3). Senate Bill No. 1047: Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. California Legislature. https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202320240SB1047

European Parliament. (2023, June 1). EU AI Act: First regulation on artificial intelligence. European Parliament News. https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence

Gibson Dunn (2024, September 24). Regulating the Future: Eight Key Takeaways from California’s SB 1047, Pending with Governor Newsom. Gibson Dunn. https://www.gibsondunn.com/regulating-the-future-eight-key-takeaways-from-california-sb-1047-pending-with-governor-newsom/

Musk, E. [@elonmusk]. (2024, September 15). AI should be regulated like any product/technology that is a potential risk to the public [Tweet]. Twitter. https://x.com/elonmusk/status/1828205685386936567?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1828205685386936567%7Ctwgr%5Eb0d709a708c02735de6f79bae39d6c06261b27d9%7Ctwcon%5Es1_&ref_url=https%3A%2F%2Fwww.businessinsider.nl%2Felon-musk-says-hes-backing-californias-controversial-ai-bill%2F

Tan, K. W. K. (2024, August 27). Elon Musk says he's backing California's controversial AI bill. Business Insider Nederland. https://www.businessinsider.nl/elon-musk-says-hes-backing-californias-controversial-ai-bill/

The featured image was generated by ChatGPT.


Digital Disruption: How Robotaxis Are Shaping the Future of the Taxi Market

20 September 2024


Current state of robotaxis

Since Google began developing autonomous driving technology in 2009, we have witnessed rapid advancements in the field. In 2020, China's Baidu launched its autonomous driving trials, further accelerating progress. By 2021, both Cruise and Waymo had begun limited commercial operations in the U.S. As of May 2024, Baidu's Apollo Go has launched Robotaxis across major cities in China, and in the first half of 2024 it completed 680,000 rides in Wuhan alone, reflecting the growing adoption of Robotaxis in China.

How do Robotaxis work?

  • Customer experience

The customer experience of using a Robotaxi can be described as exceptionally smooth. Users simply request a ride through an app, much like Uber, by entering their pick-up and drop-off locations. The Robotaxi autonomously drives to the pick-up point, and customers unlock the door using the app. Once inside, customers can confirm the driving route, adjust the temperature, and even play music. Upon reaching their destination, the fare is displayed, and after confirming the bill and making payment, the ride experience is complete.

  • Algorithms and technologies behind Robotaxis

Robotaxis use technology like LIDAR, cameras, radar, and AI to navigate and operate without a driver. These cars have sensors to perceive their surroundings, software to plan routes and avoid obstacles, and actuators for steering and braking. The systems also rely on high-definition maps and, in some cases, vehicle-to-everything (V2X) communication to exchange data with infrastructure and other vehicles.

There are two main types of Robotaxis. The first involves retrofitting existing vehicles with sensors and software. This method is faster and cheaper but may limit design effectiveness. The second approach involves building vehicles from scratch, optimized for autonomy with advanced sensors and no manual controls, but at a higher cost and longer production time.

Robotaxi companies currently hire safety supervisors to keep passengers safe. In-vehicle safety operators monitor the vehicle's autonomous systems and are trained to take control in emergencies. Some companies are also transitioning to remote supervision, where operators monitor and intervene from a control centre.

The Impact of Robotaxis on Passengers, Drivers, and the Automotive Industry

  • For passengers

Robotaxis affect passengers in many ways. Fares are typically 60-80% of what Uber charges, making it a cost-effective alternative. Robotaxis are safer, especially for women travelling late at night. They follow traffic laws, so there is less speeding and reckless overtaking. Robotaxis are also quiet and private, which is good for business people.

However, passengers have concerns too. Some feel Robotaxis drive slowly or handle traffic poorly because they follow the rules too strictly. There are also privacy concerns: Robotaxis carry so many sensors that they may capture personal information, raising questions about passenger privacy. They could also be vulnerable to hacking, which could compromise passenger safety. Additionally, passengers worry about how a Robotaxi would respond to a sudden network failure, highlighting the need for robust backup systems.

  • For Taxi Drivers

For taxi drivers, Robotaxis pose significant challenges. Robotaxis compete directly for passengers, reducing the customer base for traditional taxi drivers. Sharing the road with Robotaxis also introduces complications. As Robotaxis strictly follow traffic rules, they often hesitate to turn, occupy lanes for long periods or change lanes slowly, all of which can disrupt the flow of traffic and reduce the driving efficiency of taxi drivers, causing frustration and delays.

However, the situation creates new opportunities as well. Companies such as Baidu, which operates Apollo Go, are hiring numerous remote supervisors, which could represent a new career path for taxi drivers whose roles may be taken over by Robotaxis.

  • For the car industry

The rise of Robotaxis is transforming the automotive industry by shifting consumer preference from car ownership to mobility services, resulting in a decline in traditional vehicle sales. Manufacturers are adapting by redesigning vehicles specifically for autonomy, eliminating traditional controls to enhance passenger comfort and safety. Additionally, significant investments in advanced technologies, such as AI and sensor integration, are necessary to remain competitive and ensure the safe operation of these vehicles. As these changes take place, the industry will focus on innovation and the evolution of urban mobility.

Conclusion

In my view, Robotaxis represent the future of the taxi industry. I would definitely try and choose to ride in a Robotaxi, as I can no longer tolerate rude, reckless drivers. While there are currently many safety and technical concerns, I believe that with improved industry regulations and advancements in technology, Robotaxis will become increasingly sophisticated—much like many new technologies that faced initial skepticism but later gained widespread acceptance.

So, what are your opinions? Would you choose to ride in a Robotaxi instead of an Uber? Feel free to share your opinions in the comments!


Challenges of Responsible AI usage in Dutch Government Institutions

18 September 2024


As artificial intelligence (AI) reshapes almost every aspect of society, it has become an unavoidable technology for governments to adopt in their operations. The Dutch government has embraced the transformative potential of AI, but responsible usage within its institutions remains a challenge. Several initiatives have been introduced to harness AI responsibly while keeping legal and ethical concerns in mind. Generative AI is being explored for applications across various governmental services. However, its rapid integration poses risks, including potential biases, data misuse, and threats to public trust.

A major concern is ensuring that AI adheres to principles of fairness, accountability, and transparency, especially when used in areas like public administration or law enforcement. The Dutch government has committed to conducting thorough risk assessments for AI projects, as well as algorithm impact evaluations to identify and mitigate risks before deployment (Ministerie van Binnenlandse Zaken en Koninkrijksrelaties, 2024).

One notable example is the AI-driven pilot projects in Amsterdam aimed at addressing societal issues such as equal opportunities and media disinformation. These initiatives demonstrate how AI can be leveraged for good, while also underlining the importance of public-private partnerships in aligning AI development with societal values (Universiteit van Amsterdam, 2023). Despite these positive efforts, the government faces the challenge of regulating AI without suppressing innovation. The AI Act, which aims to govern AI risks at the European level, plays a critical role in setting the standards for responsible AI use (European Commission, 2024).

In my opinion, fostering strong AI adoption in the Netherlands requires not only robust regulation but also substantial investment in AI expertise and infrastructure. The Netherlands is taking essential steps toward responsible AI usage, but as discussions around governmental AI use continue, there is a need for public dialogue about the balance between innovation and ethical governance.

References

Ministerie van Binnenlandse Zaken en Koninkrijksrelaties. (2024, January 18). Dutch government presents vision on generative AI. Government.nl. https://www.government.nl/latest/news/2024/01/18/dutch-government-dutch-government-presents-vision-on-generative-ai

Universiteit van Amsterdam. (2023, May 22). 'Make the Netherlands a frontrunner in responsible AI'. https://www.uva.nl/shared-content/uva/en/news/news/2023/05/make-the-netherlands-a-frontrunner-in-responsible-ai.html?cb

European Commission. (2024, September 10). AI Act: Shaping Europe's Digital Future. https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai


Toxic Code: How Poisoning Attacks Are Undermining AI Systems

16 September 2024


In the rapidly evolving world of artificial intelligence (AI), not all advancements are aimed at making systems smarter. Some are designed to make them fail. Enter poisoning attacks, a new form of sabotage that can turn intelligent systems against themselves. But how do these attacks work, and should we really care?

What Are Poisoning Attacks?

Imagine teaching a student a mix of good and false information. If you sprinkle enough false information into the lessons, even the brightest student will reach some incorrect conclusions. Poisoning attacks on AI work similarly: an attacker corrupts the data used to train the model so that it makes errors once deployed (Shafahi et al., 2018). For example, consider a self-driving car trained on images of road signs. If an attacker can poison the training set with even a small number of mislabeled stop-sign images, the car could misread traffic rules and endanger not only its passengers but everyone on the street (Wang et al., 2023).
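To make the idea concrete, here is a toy illustration of one simple poisoning technique, label flipping, using scikit-learn. The dataset, model, and poisoning rate are arbitrary choices for demonstration and are not taken from the cited papers.

```python
# Toy data poisoning via label flipping: train the same model on clean
# and on partially mislabeled data, then compare test accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# The "attack": flip the labels of 10% of the training points.
rng = np.random.default_rng(0)
poisoned = y_tr.copy()
idx = rng.choice(len(poisoned), size=len(poisoned) // 10, replace=False)
poisoned[idx] = 1 - poisoned[idx]

dirty = LogisticRegression(max_iter=1000).fit(X_tr, poisoned)

print("clean model accuracy:   ", clean.score(X_te, y_te))
print("poisoned model accuracy:", dirty.score(X_te, y_te))
```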

(Figure: overview of types of AI attacks; Dash & Bosch AIShield, 2023)

Real-World Impact: Why Should You Care?

Poisoning attacks aren’t just a theoretical risk; they are a real threat to AI systems today. Take GitHub Copilot, an AI-powered code completion system that helps developers autocomplete their code in real time (GitHub, 2023). An attacker could poison the data such a system learns from and steer it toward generating vulnerable code with security defects (Improta, 2024). While this may seem like a problem that only affects coders, it can cause problems for everyone else too: vulnerable code can lead to ordinary people losing their private data, as in the recent Social Security Number breach in the USA (Chin, 2024). Another example of how poisoning attacks could affect your everyday life is social media. Algorithms could be altered to determine what goes viral or to spread misinformation by pushing fake news to a large number of users. That is a scary thought, as news is increasingly filtered by AI.

Defending Against Poisoning: A Losing Battle?

Defenses against poisoning attacks are evolving every day, although attackers often seem to be one step ahead. Anomaly detection systems are being integrated into AI pipelines, but this raises a question: how much of the data needs to be infected before it no longer registers as an anomaly (Huang et al., 2022)? As Kurakin et al. (2016) highlight in “Adversarial Machine Learning at Scale”, attackers exploit vulnerabilities in real time, creating a race between “poison” and “antidote”. Still, the poison is being treated: continuous advances in AI security and collaboration among researchers are making defenses smarter, and the future looks promising for AI-based systems.
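Continuing the toy example from above, here is one hedged sketch of such an anomaly-detection defence, in the spirit of the isolation-forest approach the cited paper studies (Huang et al., 2022). Fitting one detector per class means a point whose features look unusual for its (possibly flipped) label gets dropped; the contamination rate is a guess a real defender would have to tune.

```python
# Sketch of an anomaly-detection defence: drop training points whose
# features look unusual for their assigned class, then retrain.
# Reuses np, X_tr, X_te, y_te, poisoned and LogisticRegression from above.
from sklearn.ensemble import IsolationForest

keep = np.zeros(len(poisoned), dtype=bool)
for label in (0, 1):
    cls = poisoned == label
    det = IsolationForest(contamination=0.15, random_state=0)
    keep[cls] = det.fit_predict(X_tr[cls]) == 1   # +1 = inlier, -1 = flagged

filtered = LogisticRegression(max_iter=1000).fit(X_tr[keep], poisoned[keep])
print("filtered model accuracy:", filtered.score(X_te, y_te))
```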

Conclusion: Can We Trust AI?

AI holds a great deal of potential, but it is only as good as the data we feed it. The reality is that this is just the beginning of the fight to secure data, and by extension, AI itself. The future of technology is being shaped by these poisoning attacks, so stay tuned and keep your eyes out for misinformation. And don’t forget: data is the driving force behind everything!

References

Kurakin, A., Goodfellow, I. J., & Bengio, S. (2016, November 4). Adversarial Machine Learning at Scale. ArXiv (Cornell University). https://doi.org/10.48550/arxiv.1611.01236

ChatGPT. (2024, September 16). A Hacker Injecting Poison into an AI Brain Using a Syringe, in a Panoramic Style

Chin, K. (2024, February 20). Biggest Data Breaches in US History. UpGuard. https://www.upguard.com/blog/biggest-data-breaches-us

Dash, M., & Bosch AIShield. (2023, May 9). Understanding Types of AI Attacks. AI Infrastructure Alliance. https://ai-infrastructure.org/understanding-types-of-ai-attacks/

GitHub. (2023). GitHub Copilot · Your AI pair programmer. GitHub. https://github.com/features/copilot

Huang, S., Bai, Y., Wang, Z., & Liu, P. (2022, March 1). Defending against Poisoning Attack in Federated Learning Using Isolated Forest. IEEE Xplore. https://doi.org/10.1109/ICCCR54399.2022.9790094

Improta, C. (2024). Poisoning Programs by Un-Repairing Code: Security Concerns of AI-generated Code. https://arxiv.org/pdf/2403.06675

Shafahi, A., Huang, W., Najibi, M., Suciu, O., Studer, C., Dumitras, T., & Goldstein, T. (2018). Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks. https://arxiv.org/pdf/1804.00792

Wang, S., Li, Q., Cui, Z., Hou, J., & Huang, C. (2023). Bandit-based data poisoning attack against federated learning for autonomous driving models. Expert Systems with Applications, 227, 120295. https://doi.org/10.1016/j.eswa.2023.120295


Deepfakes and digital business models

16 September 2024


Deepfakes are AI-generated media that mimic real people’s appearances and voices. They have rapidly evolved from a technological curiosity into a significant force reshaping digital business models. Deepfakes now offer a wide range of commercial applications, from personalized advertising and virtual influencers to content creation and customer service automation (Herbert Smith Freehills, 2024; Ferraro et al., 2024). However, as the technology advances, it brings complex ethical challenges, especially around misinformation.

Digital business models are using deepfake technology to innovate and enhance customer engagement. Companies are exploring virtual influencers who engage with audiences and offer brands a new way to connect without human influencers. Deepfakes also play a role in personalized marketing, where tailored, AI-driven content creates more compelling advertisements. However, the rise of deepfakes goes hand in hand with ethical challenges, including concerns about authenticity, consent, and misuse. As businesses adopt these technologies, they must carefully weigh the potential risks against the opportunities.

The ethical challenges connected to deepfakes are significant, particularly now that digital transformation is spreading across industries. A major concern is their use in spreading misinformation, such as deepfake videos of politicians on big social media platforms like Meta and X, which can undermine trust in public figures and institutions. Additionally, deepfakes raise consent and privacy issues, because such media can be created and shared without an individual’s permission. Companies using or planning to use this technology must therefore implement ethical guidelines that clearly label synthetic media and do their utmost to prevent misuse and consent violations.

As deepfakes influence digital business models companies must balance innovation with responsibility. While deepfakes offer immense potential in marketing, entertainment, and customer engagement, they also pose significant risks. Companies need to explore these opportunities but must also set ethical standards and develop safeguards to protect individuals and society. The future of deepfakes in business depends on leveraging their potential while carefully managing ethical implications.


Herbert Smith Freehills. (2024). Deepfakes in advertising – who’s behind the camera? https://www.herbertsmithfreehills.com/notes/tmt/2024-02/deepfakes-in-advertising-whos-behind-the-camera

Ferraro, C., Demsar, V., Sands, S., Restrepo, M., & Campbell, C. (2024). The paradoxes of generative AI-enabled customer service: A guide for managers. Business Horizons, 67(5), 549–559. https://doi.org/10.1016/J.BUSHOR.2024.04.013


Netflix’s (seemingly too?) Perfect Recommendation System.

7 September 2024


Netflix is widely seen as one of the world’s most successful streaming platforms to date. Many might credit this success to its broad library of fantastic titles and its simple yet effective UI. Behind the scenes, however, a lot more is going on to keep users on the platform longer and, most importantly, reduce subscriber churn.

While Netflix has 277 million paid subscribers across 190 countries, no two of those users get the same experience. Over time, Netflix has developed its incredibly intelligent Netflix Recommendation Engine (NRE) to leverage data science and create the ultimate personalized experience for every user. I think most of us are aware of personalization algorithms, but not of the extent to which they go!

The NRE is composed of multiple algorithms that filter Netflix’s content based on a user’s profile. These algorithms sift through more than 5,000 titles, divided into clusters, all based on an individual subscriber’s preferences. The NRE analyzes a wealth of data, including a user’s viewing history, how long they watch specific titles, and even how often they pause or fast-forward. As a result, the videos most likely to be watched by that user are pushed to the front. According to Netflix, this is essential, since the company estimates it has only around 90 seconds to grab a consumer’s attention. As consumer attention drops even further (with apps like TikTok destroying our attention spans), this will likely become even more of a problem in the future. I mean, who has the time to sit down and watch a whole movie these days??
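As an intuition pump only, here is a deliberately tiny sketch of similarity-based ranking. The watch-history matrix is made up, and the real NRE is vastly more sophisticated than this.

```python
# Toy "users who watched what you watched" ranking over a made-up matrix.
import numpy as np

# rows = users, columns = titles; 1 = watched, 0 = not watched
history = np.array([
    [1, 1, 0, 0, 1],
    [1, 0, 1, 0, 0],
    [0, 1, 0, 1, 1],
])

def recommend(user: int, k: int = 2) -> np.ndarray:
    """Score unseen titles by how often similar users watched them."""
    sims = history @ history[user]        # viewing overlap with every user
    sims[user] = 0                        # ignore the user themselves
    scores = sims @ history               # weight titles by user similarity
    scores[history[user] == 1] = -1       # hide titles already watched
    return np.argsort(scores)[::-1][:k]   # indices of the top-k titles

print(recommend(user=0))
```

Swap the 0/1 matrix for watch durations, pause counts, and thousands of titles, and you get a crude picture of the signals the real engine weighs.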

This also ties into the concept of the Long Tail we discussed, which refers to offering a wide variety of niche products that appeal to smaller audience segments. Netflix can now surface lesser-known titles to the right audiences using its recommendation algorithms. While these niche titles might never have been discovered by users in the past, Netflix can now monetize the Long Tail of its library. You have surely noticed that family or friends have titles on their homepage that you would never see on your own; that is the NRE at work.

While this model is largely successful, it might raise concerns around content bias. For example, Netflix’s use of different promotional images for the same content based on a user’s perceived race or preferences has sparked debate. Although the intent is to tailor recommendations more effectively, it risks reinforcing stereotypes and narrowing the scope of content that users are exposed to.

Ultimately, user data is exchanged for a super personalized experience, though this experience can sometimes be flawed. What do you think about Netflix’s NRE and its effects on users? Do you think this data exchange is fine, or would you rather just see the same Homepage as everyone else?


Adversarial attacks on AI models: a big self-destruct button?

21 October 2023


“Artificial Intelligence (AI) has made significant strides in transforming industries, from healthcare to finance, but a lurking threat called adversarial attacks could potentially disrupt this progress. Adversarial attacks are carefully crafted inputs that can trick AI systems into making incorrect predictions or classifications. Here’s why they pose a formidable challenge to the AI industry.”

ChatGPT then went on to list various reasons why these so-called ‘adversarial attacks’ threaten AI models. Interestingly, I had only asked ChatGPT to explain the disruptive effects of adversarial machine learning. I followed up with the question: how could I use adversarial machine learning to compromise the training data of an AI? Unsurprisingly, the answer I got was: “I can’t help you with that”. This conversation made me speculate about possible ways to destroy AI models. Let us explore this field and see whether it could provide a movie-worthy big red self-destruct button.

The Gibbon: a textbook example

Feed GoogLeNet, one of the best-known image classification systems, a picture that is clearly a panda, and it will tell you with great confidence that it is a gibbon. This is because the image secretly carries a layer of ‘noise’, invisible to humans but a great hindrance to deep learning models.

This is a textbook example of adversarial machine learning. The noise works like a blurring mask, keeping the AI from recognising what is truly underneath. But how does this ‘noise’ work, and can we use it to completely compromise the training data of deep learning models?

Deep neural networks and the loss function

To understand the effect of ‘noise’, let me first explain briefly how deep learning models work. Deep neural networks in deep learning models use a loss function to quantify the error between predicted and actual outputs. During training, the network aims to minimize this loss. Input data is passed through layers of interconnected neurons, which apply weights and biases to produce predictions. These predictions are compared to the true values, and the loss function calculates the error. Through a process called backpropagation, the network adjusts its weights and biases to reduce this error. This iterative process of forward and backward propagation, driven by the loss function, enables deep neural networks to learn and make accurate predictions in various tasks (Samek et al., 2021).

So while training a model involves minimizing the loss function by updating the model parameters, adversarial machine learning does the exact opposite: it maximizes the loss function by updating the inputs. The updates to these input values form the layer of noise applied to the image, and with exactly the right values they can lead any model to believe anything (Huang et al., 2011). But can this practice be used to compromise entire models, or is it just a ‘party trick’?
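To make “maximize the loss by updating the input” concrete, here is a minimal sketch of the fast gradient sign method (FGSM) in PyTorch. The pretrained model, the random stand-in image, and the epsilon value are illustrative assumptions; with a real photo, this is essentially the recipe behind the panda-to-gibbon trick.

```python
# Minimal FGSM sketch: nudge the input in the direction that increases
# the loss. Uses a random tensor as a stand-in for a real image.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights="IMAGENET1K_V1").eval()
image = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in "photo"
label = torch.tensor([388])  # ImageNet class 388: giant panda

loss = F.cross_entropy(model(image), label)
loss.backward()  # gradients now tell us how to *increase* the loss

epsilon = 0.007  # small enough to stay invisible to humans
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("prediction before:", model(image).argmax().item())
print("prediction after: ", model(adversarial).argmax().item())
```

With a random tensor the prediction may or may not flip, but applied to a natural image with a well-chosen target, this single gradient step is exactly the invisible-noise attack described above.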

Adversarial attacks

Now we get to the part ChatGPT told me about. Adversarial attacks are techniques for manipulating machine learning models by adding imperceptible noise to input data. Attackers exploit vulnerabilities in the model’s decision boundaries, causing misclassification, and by injecting carefully crafted noise in vast amounts they can corrupt the training data of AI models. There are different types of adversarial attacks: if the attacker has access to the model’s internal structure, they can apply a so-called ‘white-box’ attack, in which case they could compromise the model completely (Huang et al., 2017). This would pose serious threats to AI models used in, for example, self-driving cars, but luckily, access to the internal structure is very hard to gain.

So, if computers were to take over from humans in the future, as science fiction movies predict, could we use attacks like these to bring those evil AI computers down? In theory, we could, though practically speaking there is little evidence, as there have been no major adversarial attacks yet. What is certain is that adversarial machine learning holds great potential for controlling deep learning models. The question is whether that potential will be exploited for good, as a means of keeping control over AI models, or used as a means of cyber-attack, justifying ChatGPT’s negative tone when explaining it.

References

Huang, L., Joseph, A. D., Nelson, B., Rubinstein, B. I., & Tygar, J. D. (2011, October). Adversarial machine learning. In Proceedings of the 4th ACM workshop on Security and artificial intelligence (pp. 43-58).

Huang, S., Papernot, N., Goodfellow, I., Duan, Y., & Abbeel, P. (2017). Adversarial attacks on neural network policies. arXiv preprint arXiv:1702.02284.

Samek, W., Montavon, G., Lapuschkin, S., Anders, C. J., & Müller, K. R. (2021). Explaining deep neural networks and beyond: A review of methods and applications. Proceedings of the IEEE, 109(3), 247-278.


Snapchat’s My AI – A Youthful Playground or a Privacy Nightmare?

19 October 2023


A post on this very blog site from 2018 called Snapchat a platform in decline, and I agree with that statement. Not since my high school years have I regularly used Snapchat to communicate with someone. After a long period of inactivity and countless notifications piling up, I decided to open the app some months back and was met with a notification about updates to their Privacy Policy. At that moment I did not give it much attention, just agreed to the terms, and went to the user interface. A new feature at the top of the Chat function caught my eye, My AI.
My AI is a customizable, user-friendly, engaging AI chatbot, and one of the many steps Snapchat has taken to regain its popularity. Remember those times you opened Snapchat and closed it in disappointment, with no new notifications and no one to talk to? My AI solves that issue, keeping you constant company in the form of information and entertainment, designed to better understand and cater to your preferences. It is effectively your AI best friend, but less transactional than other AIs.

I don’t know if it was curiosity or boredom, but my mind immediately raced back to the updated Privacy Policy and I decided to give the whole thing a read. As of 15th August 2023, their new Privacy Policy contains some important changes. A major change here is expanding the amount and type of data Snapchat stores, most recently including conversations with My AI. This is on top of all the information Snapchat already amasses from their users, such as usage, content, device, and location information. “But every social media platform personalizes their user experience and employs targeted advertising?”, you might say. Point noted, which is why I moved on to how this data is being used by their affiliate companies. The screenshot below is the only information I could find, and clicking on the link would only lead me into an endless loop within the Privacy Policy statement.  

If I still haven’t been able to make you raise your eyebrows, I urge you to recognize Snapchat’s target group: teenagers.
Did your fourteen-year-old self have the same level of digital maturity and behavior that you currently possess? Did you truly understand the extent to which your data is collected, let alone the fact that this data determines the content you interact with on a platform? And finally, consider the rationale of using Snapchat: Why send pictures or texts that are deleted after being opened unless you do not want them to be saved? Other than by Snapchat, of course.

Attached below is the help my AI best friend on Snapchat provided me about a ‘common’ problem for teenagers. Make of that what you will.


AI-Powered Learning: My Adventure with TutorAI

16 October 2023

