Managing Thoughts: The Future of Human Augmentation

8 September 2021


Sometimes referred to as “Human 2.0”, human augmentation is the idea of improving intellectual or bodily functions to maximise human capacity. While the technology can be as simple as replicating existing human abilities – prosthetics, say, or organic tissues created for medical purposes – it can also go as far as supplementing or even surpassing standard human abilities, possibly disrupting humanity as we know it.

One obvious example is the exoskeleton: something we’ve seen in sci-fi films like Aliens or Avatar, but that is very much real today. In fact, the technology has been in use for well over ten years – this video from 2011 already shows it working – replacing wheelchairs in some cases and aiding military operations in others. Here is a more recent video (2021) highlighting its use for increased mobility.

Another human-focused innovation comes from one of Elon Musk’s start-ups, Neuralink (https://neuralink.com/): a technology that interfaces the brain with digital platforms. While brain wearables – such as the emotion-controlled Necomimi cat ears, a Japanese neuro-toy – look for one specific input and provide an extremely simple output, brain implants are another story.

Image: Neuralink – Brain Implant Concept

Neuralink is a work-in-progress technology that tracks brain activity at a level of detail not seen before. In fact, a demo was shown in which a monkey wirelessly played video games using only its brain.

From an information technology perspective, this means tracking, cataloguing, and interpreting brain-based human input – transforming a new type of data into information. It has the potential to disrupt hardware norms: perhaps we won’t need remotes, keyboards, or mice if we can control digital platforms with our minds.
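To make the idea concrete, here is a minimal, purely hypothetical sketch of that pipeline in Python: a window of (already normalised) neural-activity samples is reduced to a single feature and mapped to a UI command. The function name, the threshold, and the feature are my own illustrative assumptions, not a description of how Neuralink actually works.

```python
from statistics import mean
from typing import List


def decode_command(signal_window: List[float], threshold: float = 0.6) -> str:
    """Map a short window of normalised neural activity to a UI command.

    A real brain-computer interface would extract many features per electrode
    and feed them into a trained classifier; here a single averaged activation
    and a fixed threshold stand in for that whole pipeline.
    """
    activation = mean(signal_window)      # "interpret" the raw samples
    if activation > threshold:
        return "CURSOR_RIGHT"             # strong activity: move right
    if activation < -threshold:
        return "CURSOR_LEFT"              # strong inverse activity: move left
    return "IDLE"                         # otherwise, do nothing


# Example: a window recorded while the user "thinks right"
print(decode_command([0.7, 0.8, 0.65, 0.9, 0.75]))   # -> CURSOR_RIGHT
```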

But it doesn’t stop there.

Let’s talk about exceeding human abilities: the future that goes beyond the digitalization of the human race.

Have you heard of memory implants? This tech, albeit still in early development, has the potential not only to help Alzheimer’s patients recover by mimicking the signal processing of neurons, but also to increase human cognitive abilities in general. In fact, a hippocampal prosthesis of this kind was already tested on monkeys back in 2013 and significantly improved their performance on an image-identification task.

In 2018, the first human implant was demonstrated, showing an increase of up to 37% in memory function in patients with memory impairments. While the tech has proven itself in re-establishing connections, it has yet to be tested as a means of improving regular human function by modifying brain connections.

Together, brain input tracking and memory enhancement open up endless possibilities in terms of potential, but also in terms of morals. Should decoded brain data be meddled with at all – tracked, modified, used?

Brain-computer interface technology is a concept that incites fear in many people. It’s easy to imagine the worst: open access to memories, hacking, or perhaps even more nefarious ends. However, change is often a cause of distress, and this is no different.

One thing is sure: the tech is coming. But will we be ready for its implications? In what other ways can you imagine it disrupting industries as we know them?


Family Firms x Digitalization – A Contradiction in Terms?

22 September 2020

In the public perception, family firms may seem backward and barely digitized. It seems clear that multinationals lead the fourth industrial revolution. But what about family firms? Can they keep pace in an ever more rapidly changing business environment?

Three main criteria define a family firm: 1) family firms are subject to the significant and characteristic influence of one or several families; 2) typically, this is connected to majority ownership of the shares or of the voting rights of the company; 3) family members do not have to be employed in the company, and third parties can run the management (Berthold, 2010). Even though family firms may be very heterogeneous, they share crucial fundamental characteristics such as long-term orientation, institutional memory, smart diversification, and a balance between tradition and change. Family firms’ leaders run their business with the objective of handing it over to the next generation in better condition than when they received it. The family members’ long relationship with the firm and their deep industry knowledge help them guide the business even through troubled economic waters. They make smart decisions on business diversification to reduce risk and leverage knowledge. Finally, tradition and the adaptation to, for example, technological change are always kept well balanced (Rodriguez, n.d.). It seems as if those characteristics may support family firms’ successful digital transformation.

But at the same time, these very characteristics may be the reason why family firms in particular struggle with digitalization. The geographical and temporal independence of information and data fundamentally contrasts with family firms’ limited resources and regional roots. Family firms are naturally forced to focus, and may thereby overlook innovations that disrupt their business. Furthermore, family firms’ hierarchical structures and centralized decision making may leave many of the opportunities that come with digitalization unused. Finally, entrepreneurs in family firms often want to be free from dependencies and act on their own free will; yet in a more digitalized world, individual companies will scarcely be able to operate in isolation from the digital world and from other companies (Cravotta & Grottke, 2019). The general long-term orientation of family firms is reflected in average CEO tenure: family firm leaders stay with their company for 20 to 25 years, compared with six years for publicly owned firms. This increases the difficulty of coping with shifts in technology, business models, and consumer behavior (Stalk & Foley, 2012). In fact, 75% of family businesses agree that there is a need for digitalization, but given their limited understanding of it, they may not fully grasp its significance or possible benefits. Among family businesses that see a need to digitalize, 63% cited a lack of the expertise and skills needed to develop and implement a digitalization strategy (KPMG, 2017).

To conclude, a family firm’s key differential resources – its organizational culture, the employees’ deep commitment and loyalty, its patient financial capital, and its long-term orientation – can become critical barriers when coping with digital transformation (von Olenhusen, 2019). Yet those very same characteristics could also contribute to a successful way of mastering the fourth industrial revolution. Whether a family firm’s characteristics inhibit or promote digital transformation will remain a subject of debate.

 

References:

Berthold, F. (2010). Familienunternehmen im Spannungsfeld zwischen Wachstum und Finanzierung. Lohmar: Josef Eul Verlag.

Cravotta, S., & Grottke, M. (2019, January-June). Digitalization in German family firms – some preliminary insights. Journal of Evolutionary Studies in Business, 4(1), pp. 1-25.

KPMG. (2017, May 29). Family businesses in the digital economy. Retrieved September 2020, from KPMG: https://home.kpmg/sg/en/home/insights/2017/05/family-businesses-in-the-digital-economy.html

Rodriguez, K. (n.d.). Why Family Businesses Outperform Others. Retrieved September 2020, from The Economist: https://execed.economist.com/blog/industry-trends/why-family-businesses-outperform-others

Stalk, G., & Foley, H. (2012, January-February). Avoid the Traps That Can Destroy Family Businesses. Retrieved September 2020, from Harvard Business Review: https://hbr.org/2012/01/avoid-the-traps-that-can-destroy-family-businesses

von Olenhusen, F. (2019, October 7). Digital transformation in German family firms: internal enablers and barriers for the development of dynamic capabilities for digital transformation. Retrieved September 2020, from Universidade Católica Portuguesa: https://repositorio.ucp.pt/handle/10400.14/29080

Image source: Unsplash – Markus Winkler


Artificial Intelligence in warfare – threat or opportunity?

29 September 2019



The US Air Force published a picture of the future of warfare. It depicts the use of satellites that gather big data in order to stay one step ahead of the opponent.

“Guns do not kill, people do.” It is an argument heard many times from pro-gun activists, but in the future it might be even less truthful than before.

You might have heard of AI-infused drones that autonomously decide whom to target and whom to spare. Claims such as these can fuel the opinion that AI should not be integrated into weapons and modern warfare equipment at all.

This black-and-white picture might turn grey, however, if AI can also be used to increase the accuracy of weapons, decrease the amount of explosives used, and avoid civilian casualties.

Many would agree that AI-infused systems should not be able to initiate a deadly strike on their own, and that humans should stay in the decision-making loop. However, many would also like to exclude humans from the loop, as people tend to slow processes down.

Fully autonomous weapons would be dangerous, as machine-learning neural networks are like the human brain: systems that are difficult or even impossible for humans to understand.

People also tend to resent new technologies even when the advantages outweigh the disadvantages – just think of self-driving cars, which face far more regulation than human drivers do. At the same time, AI is not feasible for every application, as the margin of error must be zero, especially where weapons of mass destruction are concerned.

Talking about drones and terminators is trendy and generates clickbait, but the real advantage of AI and digitalization lies in the processing of big data. Faster processing of information increases situational awareness and speeds up decision-making. Think of modern aircraft, which gather and process enormous amounts of information that the pilot alone could not make sense of without the help of the machine. However, decision-making is still the responsibility of a human.

On the other hand, AI and digitalization may also create new vulnerabilities and weaken cybersecurity. The disadvantage of information is that it can be leaked or hacked. Machine learning in particular is exposed to manipulation, as the machine cannot explain how it came to a certain conclusion; with people, you can at least ask for their reasoning. Fortunately, it is relatively easy to modify systems in ways that make them less vulnerable.

 

What do you think the role of information will be in the future? Do you think terrorists and other external forces will try to leverage these new technologies? What kinds of problems do you think we could solve with AI and machine learning?

 


Putting the E in Education

23 September 2017

In the first lecture, we spoke of Porter, and of how companies should or should not have an ‘internet strategy’. Porter argued that companies should integrate the internet into their strategy, and thus not have a separate internet strategy. This, however, was an argument he made in 2001, and we brought up the examples of Instagram and Uber as counter-points. The reading material for the previous lecture expanded upon ‘digital strategy’, and in doing so disagrees with Porter’s arguments. As the article considers why some companies are digitally mature while others are not, I wondered. For if digital strategy encompasses an attitude change, valuing agility and creativity and initiative – if it is, thus, much more than merely incorporating the digital world into a pre-existing strategy, if it is more than merely maintaining a Twitter page…

Then where is the digital strategy left in the education system?

To be sure, schools – for teenagers – proudly announce the purchase of iPads and laptops instead of books, and homework is offered online instead of in printed textbooks. Grades and timetables can be viewed on dedicated platforms. IT and digitalisation seem to be selling points for schools these days.

And yet…

That is reminiscent of Porter’s argument, isn’t it?

For nothing fundamental has changed. Schools operate exactly the same as they have always done. Coated in a digital layer, yes, but underneath, nothing has changed. Whereas in the business world, if the articles are to be believed, it is pretty clear that a digital strategy is so much more – and so much more successful, too, if not outright necessary for survival.

It is my opinion that digitalisation may prove to be a very necessary boon to education. Before expanding upon that, I will write about how hard it is to find any concrete data that could be used to improve education across the board – not only in one school, but systematically and fundamentally – and highlight some of the problems that, in the perception of some, plague modern-day (specifically secondary) education systems. Lacking any large-scale, concrete, and comprehensive research, perhaps it is simply up to the individual – to you – to consider how, or if, you want education to evolve.

PROBLEMS WITH RANKINGS

I have had thoughts on this topic for some years. I have always argued that secondary education – the education given from roughly age twelve to age eighteen – is the most inefficient and useless construction that exists. The naive whining of a teenager, perhaps, unhappy with long days of school, easily dismissed with a smile and a pat on the head. I cannot back up my opinion with statistics or facts, for the many education systems of the world differ very, very much, and there is a wealth of difference even between the very highest-ranked systems. One might also question the methodology behind these rankings; if one looks at the rankings of universities, one will find that they are ranked by very America-centric standards, including sports teams and attached research complexes. In my eyes, that is not at all the primary concern of a university, and rather irrelevant to the learning process, but such factors might be important to others, of course. It just goes to show that these rankings are rather subjective.

However, I am mainly speaking of secondary education here, and of rankings of nation-wide systems. There is a multitude of rankings here, too, and again, they are not without their own problems. Some include tertiary education – but a list of ‘top quality universities’ seldom ranks how much and how fast one learns, nor how relevant this is – while others look exclusively at test scores – but is the stereotypical South Korean culture really desirable? Therefore, I do not think it is useful to pick a ranking, compare it to the prevalence of a digital strategy, and discuss what one can learn from this.

Even so, it is worth mentioning a few factors that I see amongst countries that consistently rank highly, purely to indicate how little one can conclude from these rankings; this would hardly be a good article if I didn’t at least give some reasons for dismissing what comes closest to a theoretical background from which we might draw conclusions. One factor, often mentioned for Finland, is the absence of homework. Finland has occasionally ranked first – but France has occasionally ranked second, and French teachers assign a lot of homework to their students. Another reason I have seen cited for the success of the Finnish education system is the absence of different ‘levels’ dividing children based on how intelligent they are (in the Netherlands, for instance, there are roughly three levels: VMBO, Havo, and VWO). Yet the failure of the USA’s education system is often ascribed to a lack of such levels, to treating everyone the same and pretending everyone can be the best of the class if only they work hard enough. As is apparent, these rankings are full of contradictions and personal biases upon closer inspection – but let it be noted that this entire article, too, is one of personal bias.

Personally, I think class sizes, teacher-to-student ratios, and ‘freedom’ are more important indicators than grades or the number of people in tertiary education. Indeed, it should be realised that not everyone needs to enter tertiary education, as people in the USA are realising, if only through their inability to pay the outrageous costs. And in the Netherlands, the wages of plumbers have soared, and there are shortages in many more such jobs – elderly care, for example – as people push themselves into higher and higher education. Not everyone needs to be trained for management or aeronautics, and without trash collectors the world would be a far worse place. Besides, the way one enters tertiary education differs per country: some demand minimum grade point averages, others demand qualifications such as the GMAT, and yet others demand a diploma from a certain level of secondary education – and then there are some universities that limit their student body by offering only a predetermined number of places in their courses. Countries and cultures differ.

Grades, too, are a wholly problematic factor to take into account; one might convert the 20-point grading scale of the French to the 10-point scale of the Netherlands, and similarly convert the letter-based grading of the UK and the USA, but does that actually paint an accurate picture? Not at all. For instance, the French practically never grade something 20/20, for ‘nothing is perfect’. And in the Netherlands, an 8.5/10 is equal to the very highest grades of the UK and the USA. Not because Dutch students are dumber – or are we? – but because grading culture is simply different (for example: https://www.studyinholland.nl/documentation/grading-systems-in-the-netherlands-the-united-states-and-the-united-kingdom.pdf). One might think that this is still relatively easy to account for, but explain that to the universities in the USA that require Dutch students to have an average grade of 9/10 or 10/10. From a Dutch perspective, that is utter lunacy.
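To see why a naive conversion misleads, here is a small illustrative sketch in Python. The linear mapping and the example grades are my own assumptions for the sake of the argument, not an official conversion table.

```python
def naive_to_us_gpa(grade: float, scale_max: float, scale_min: float = 0.0) -> float:
    """Linearly map a grade from a national scale onto a 0-4 US-style GPA."""
    return 4.0 * (grade - scale_min) / (scale_max - scale_min)


# A Dutch 8.5/10 is, in practice, comparable to the very best UK/US results,
# yet the linear mapping puts it at a GPA of only 3.4, well below 4.0.
print(round(naive_to_us_gpa(8.5, scale_max=10), 2))   # -> 3.4

# Likewise, a French 16/20 is an excellent grade, but maps to just 3.2.
print(round(naive_to_us_gpa(16, scale_max=20), 2))    # -> 3.2
```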

But even if we could perfect this, there is still the cultural issue to consider: do we want our (hypothetical, I presume) children to spend their entire days – including evenings – at school, or working for school, just to get the highest grades possible? Do we want to pretend mental health won’t suffer under this, that such excessive competition is good, that we should push ourselves to the very limit of what we can achieve and beyond? You might be outraged by some of the stories that emerge from the stereotypical south-east Asian education systems.

PROBLEMS WITH EDUCATION

And it is all so useless, for what does a test do but capture an irrelevant snapshot of ourselves? What if we are sick, or menstruating, or what if we just broke up with our loved one, or what if a parent just died? We would achieve a lower test score. And what if we took the same test twice on the same day, without any studying at all in between? We should achieve the exact same score with the exact same answers, but I would bet that we actually wouldn’t. Why do we use tests at all – and not merely use them, but utterly swamp teenagers with them, with three per week being considered an entirely reasonable amount – to determine whether someone knows the material he or she should know sufficiently well? We all know that we mainly store this knowledge in our short-term memory anyway – and that our final exams are not much different from the tests we received in the years (note the plural) before. So if we need to learn the same things over and over again, why not just study the day before the test and see how it goes? And if it goes wrong, we’ll have dozens of tests to make up for it, so it is better to enjoy our free time.

But what is the alternative? Speaking with a teacher, a private conversation, so that the teacher may ascertain how much we know? That is even more prone to bias than an exam of open questions. It is also harder to standardise and harder to organise – costing far more time and labour; what will the other students do in the meantime, and how long would they all have to wait? – and so on. And while some students might be able to expand upon their answers, indicating that they did actually grasp a deeper principle, other students might grow nervous and anxious, and perform worse. What, indeed, can one offer as an alternative to tests? Multiple-choice tests are arguably the single worst method of measurement, teaching only rote memorisation and often relying more on literary tricks to confuse the student, and on the student’s guessing ability, than on actual knowledge – even the very author of the multiple-choice test indicated as much – but they are also the easiest to grade and the least prone to grading bias (one of the many, many articles discussing multiple-choice tests: https://www.theguardian.com/commentisfree/2013/nov/12/schools-standardized-testing-fail-students).

It makes sense, then, not to do any homework, for apart from perhaps mathematics, it is not as if homework actually adds anything useful. With mathematics, the homework often resembles the test, and you might need to develop a certain proficiency in how to tackle a problem. With history, on the other hand, one is just scouring the text to find this or that date, which one will promptly forget and eventually need to relearn for a test. It is all an exercise in futility, best ignored, despite what teachers profess.

Or is it? Does it actually make sense? A study often cited is a meta-analysis by Harris Cooper (http://journals.sagepub.com/doi/abs/10.3102/00346543076001001), and it casts a different light on the above paragraph. The study shows a correlation between doing homework and achieving higher grades – but it is a very weak correlation, for one, and wouldn’t it make sense that students motivated enough to do their homework are also motivated enough to study just a bit longer for tests? There is a plethora of articles and authorities arguing for or against homework, drawing upon their authors’ personal experience as teachers, some even having conducted small-scale experiments in their own classrooms, but they remain anecdotal at best. As already said, Finnish education entirely lacks homework while French education is full of it, yet both are top contenders in various education rankings.

But then, the exact same argument – that there is little reason to do homework – applies to sitting in a classroom; can one ever look back on a day of secondary education and think ‘in class A, I learned B, and in class X, I learned Y’? One can do this for university, yes, but for secondary education? I highly doubt it. Some would argue that schools have a purpose beyond educating; they are social institutions, where children find new friends to play with and to discuss the perilous changes that come with being a teenager growing into an adult. It is not for nothing that schools offer physical education – being physically active with sports and the like – or that more and more schools choose to stock their cafeteria only with healthy foods and drinks. Some would even argue that schools keep children off the streets while their parents work, and that this daycare-esque function is also an important part of school.

These are but three problems, viewed through the eyes of students. Schools themselves have problems too, such as the increase in administrative work, the increase in parental demands, the shortage of teachers, the decrease in funding, and the fact that teachers, in most countries, do not enjoy a reputation comparable to that of doctors. The precise factors differ per country, of course, but they all influence how well an education system – hypothetical or actual – can function. The key takeaway here, I believe, is that again, studies and facts are scarce, and that it is very easy for personal biases to seep in. As do my own, no doubt.

DIGITALISATION

So far, I have lightly touched upon the various rankings of education systems, have pointed out the differences in culture, common practices, and attitudes between countries, and have laid out a variety of problems with education. I think the cultural aspect is very important, and that it strikes at the heart of the question of digital strategy. For what is strategy without vision? Any company would outline both in the same breath, on the same webpage. A question we should ask, then, is: what exactly is our vision for the ideal education system?

It might be worthwhile to take a quick trip through history. A multitude of articles can be found online, most of them written in the USA, describing how the current education system originates from the industrial era – an era where rote memorisation, obeying orders, and mindlessly doing the same tasks over and over again were more valuable than they are now. In the modern world, we value almost the exact opposite: initiative, intelligence, creativity, and freedom, for example. This, however, paints too stark a contrast, and though I could dedicate paragraphs to it, I would instead advise you to do your own research, should you wish to (or you could read a random article, but be aware of biases: http://hackeducation.com/2015/04/25/factory-model).

One might argue that some teachers in secondary education are still prone to exiling students from the classroom the moment their authority gets challenged. One might counter-argue that some teenagers are wont to cause chaos and disrupt class if a teacher does not remove them. One might speak of the unfairness of punishing people for not doing homework, of conducting so many tests, or of a host of other things. Regardless of how fair this characterisation is, it does expose a structure that hasn’t changed in more than a century – yet how many businesses still operate under the principles of yesteryear? Merely replacing textbooks with laptops won’t change anything. Just as merely hosting a website didn’t change anything. Yet businesses were forced to change, by external forces, lest they go bust. Schools have the benefit of being state-maintained, in many cases, and they enjoy a very different status from commercial businesses, of course. But even so, there are already external forces at work, in a sense, and some schools already make use of them.

WhatsApp and Facebook facilitate communication between students outside of school, and though largely used for social interaction, they are also used for sharing knowledge and answering questions about schoolwork. This adds a whole new dimension to interaction between classmates and to working together on homework or projects, as do tools such as Dropbox and Google Drive. Websites such as Coursera or Khan Academy can educate people in far more subjects than any given school could, but more important for the moment are websites such as Google and Wikipedia; can you imagine going to libraries – brick buildings – and leafing through dusty physical tomes to do research?

A growing number of schools are changing how they approach education, making use of these facilities. Working in groups is encouraged more, partly because the modern-day labour market demands it, and partly because the facilities now exist to do it properly. Instead of rote memorisation, students may conduct theoretical research or manage practical projects, given the freedom to do whatever they think is best and have a teacher judge their work. In this sense, secondary education is ever so slowly starting to resemble tertiary education. From my own experience – and that of my sister, six years my junior – I can point, for instance, to how she was allowed to work on a research project with students from the Erasmus MC. Offering this kind of ‘real-world’ or ‘hands-on’ experience will, I think, become more prevalent.

Might we see exams being taken by small groups of students, debating together over what the right answer is, perhaps even being allowed to use Google to find it? It would closely resemble the real world, where it is not so much factual knowledge but tacit knowledge – experience – that is valued. There are even some calls to abolish rote memorisation altogether – what use is mathematics when you have a calculator, or language when you have a translator and a spell checker? – but that, I believe, would be the wrong thing to do. For one, both programming – and in this it is similar to mathematics – and languages create a certain kind of mindset, one of problem-solving and analysis. Studies show that being proficient in multiple languages has all kinds of benefits far beyond merely knowing those languages: benefits concerning cognitive tasks, multi-tasking, protection against Alzheimer’s, and so on (studies on this are easy to find, for example: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3583091/).

Even so, group work is how projects are conducted in tertiary education, and there are still (open-question and multiple-choice) exams there; it need not be a dichotomy. But I doubt that secondary education can replicate everything tertiary education offers – and why should it, when secondary education inherently offers a far broader and far less deep curriculum? Professors need years of experience in their given field, and while we can find such people at universities, it is different for secondary education. Besides, teaching in the classrooms of secondary education is far more interactive than giving a lecture in a university hall, and secondary education is also where teenagers grow into adults. This requires a more social skill set that professors drawn from ‘the real world’ might lack. And all this is without taking wages into account; secondary education already has large problems attracting teachers, and this would make that problem far worse.

Perhaps we might see more ‘freedom’ in secondary education, with students not needing to attend class or do homework depending on their average grade for a given subject. Perhaps there might be more opportunities for students to learn about their preferred subjects through the internet, with teachers serving as guides, pointing to relevant material and answering questions. Perhaps education might then become a place of learning in the broadest sense, with a student equally able to learn about Dutch as about astronomy. In the more distant future, we might well have brain-to-brain interfaces, or at least brain-machine interfaces, completely upending the very concept of education. But all these ideas seem far-fetched, running into problems ranging from money to government mandates.

CLOSING WORDS

There are many, many ideas that could be mentioned, and it is not the purpose of this article to explore them all in depth. To me, it seems clear that digitalisation can achieve great things, in a multitude of directions. I think we are witnessing small changes here and there, largely staying within the confines of the last few hundred years but also seeking to connect better with tertiary education and the labour market: more group work, thanks to the rise of the internet, and more freedom, with student and teacher becoming more equal. Much more might be done, but that will require the education system to be re-invented, starting from the very vision that underpins it – a task too great for any single government, I believe.

But that doesn’t mean we can’t think about our ideal situation, and ever so slowly try to move towards it. You should form your own opinion through your own research – if you wish to do so at all; it is better to hold no opinion than an uninformed one – on what your ideal is. I am by no means an authority on education systems, and I do not have an ideal ready to be feasibly implemented right at this very moment. But if nothing else, you, the reader, will at least have spent some time thinking about this, and thoughts are the seeds of all change everywhere.

So what would you like to see changed? What can be improved? What are your thoughts, your ideas, your views?


Anonymous, hackers or hacktivists?

9 October 2016

With the digitalization of our society, the number of internet users has increased as well. In the year 2000, 414,794,957 people (6.8% of the world population) had access to the internet; now, 16 years later, 3,424,971,237 people (46.1% of the total population) have access (Internetlivestats, 2016). Of course, the more users there are on the internet, the more interesting it becomes for hackers to abuse their knowledge of its technical workings.

Today I want to talk about one of the most well-known hacker organizations: Anonymous. Anonymous was founded in 2003 on the imageboard 4chan. Its members, so-called ‘Anons’, consider themselves hacktivists, but what is the difference between a hacker and a hacktivist? A hacker is a person who gains unauthorized access to a computer system. This can be for either good or bad reasons, but usually it is to damage the system or to retrieve valuable information (Merriam-webster.com, 2016). A hacktivist, on the other hand, is a person who gains unauthorized access to a computer system and carries out various disruptive actions as a means of achieving political or social goals (Dictionary.com, 2016).

We can also distinguish white-hat hackers, who have good intentions, from black-hat hackers, who have bad intentions. White-hat hackers are usually computer-system testers or security experts. Black-hat hackers are the ones you think of when people talk about hacking: breaking into systems and damaging them.

So before we place Anonymous in one of these categories, what have they done in the past 13 years? Let’s list some actions of Anonymous that are, in my opinion, good:

  • Operation DeathEaters (thousands of pedophile networks were shut down and the names of visitors were published).
  • Shutting down hundreds of accounts, forums, and websites of IS supporters.
  • Leaking Bank of America’s corrupt and unfair mortgage practices.

On the other hand, they have also done certain things which aren’t justifiable:

  • Hacking Sony’s systems in 2011, which caused a 10% loss in stock value, and publishing tens of thousands of private emails.
  • Publishing the personal information of HBGary Federal lawyers after CEO Aaron Barr said he had ‘cracked’ Anonymous (Bright, 2012).

These are just a few examples of the hundreds of things Anonymous has claimed to do. Most of the good things they do involve publishing information and shutting down ‘bad’ websites. The bad things are mostly reactions to deeds that are, in their eyes, wrong. They also respond to threats, as in the HBGary Federal case.

So whether Anonymous is a good or a bad organization is a matter of perspective. I wouldn’t call them white-hat or black-hat hackers/hacktivists. In 1996, at the DEF CON hacking conference, a new kind of hacker/hacktivist was coined: the grey-hat hacker/hacktivist. These are hackers/hacktivists who may sometimes violate the law but do not have the malicious intentions of black-hat hackers. I would say Anonymous are grey-hat hacktivists – what do you think?

Thank you for reading!

Feel free to share your knowledge and opinions about this topic!

(Also, if you’re interested in Anonymous, check out the movie ‘We Are Legion’ (2012).)

 

 

 

References

Bright, P. (2012, October 03). With arrests, HBGary hack saga finally ends. Retrieved from http://arstechnica.com/tech-policy/2012/03/the-hbgary-saga-nears-its-end/

Hacker. (2016, October 09). Retrieved from http://www.merriam-webster.com/dictionary/hacker

Hacktivist [Def. 1]. (2016, October 9). Retrieved from http://www.dictionary.com/browse/hacktivist

Internet users. (2016, October 09). Retrieved from http://www.internetlivestats.com/internet-users/

La Monica, P. (2014, December 15). Sony hack sends stock down 10% in past week. Retrieved from http://money.cnn.com/2014/12/15/investing/sony-stock-hack/
