Cybersecurity by Design

17 September 2022


We are living in a continuously digitising world where ever more aspects of our lives are governed by IT processes. With this rapid adoption of IT, cybersecurity incidents are on the rise (ENISA, 2021). Governments and organisations alike are investing in efforts to raise cybersecurity awareness. For example, people are being trained to treat emails carefully, especially if they contain a link or an attachment. This increased cybersecurity awareness is expected to reduce the risk of cyber incidents. However, research calls the effectiveness of these awareness strategies into question: studies show that such campaigns produce little long-term change in individuals' digital behaviour (Bada et al., 2019). Given that awareness alone does not prevent users of IT systems from compromising cybersecurity, another approach is required.

The cybersecurity by design (CSD) model changes the assumption from which the awareness model operates. Instead of assuming that awareness will prevent people from making mistakes, the CSD model assumes that individuals will make mistakes regardless. The question for software developers then becomes: how can I develop my software so that the risk of a security compromise is mitigated even when careless users operate it? Major software companies like Microsoft and Google have already designed their software with this question in mind. In Outlook, emails from unverified senders are displayed in a protected mode where links, images, and attachments are disabled. This prevents users from mindlessly clicking a link or downloading a file, either of which could be harmful. Naturally, the user has the option to mark the sender as verified, thereby re-enabling the content. Another implementation of the CSD model can be found in Google Chrome. Google maintains a list of websites that might expose users to malware or phishing. When users try to navigate to a listed website, a warning message is displayed and navigation is blocked. Here too, users have the option of proceeding to the website despite the warning.
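The common thread in both examples is that the software is safe by default and only becomes more permissive after an explicit user decision. A minimal sketch of that pattern in Python could look as follows; the blocklist entries, sender list, and function names are purely illustrative and not how Outlook or Chrome implement it:

```python
# A minimal sketch of the shared pattern: disable risky content by default and
# require an explicit user decision to re-enable it. The blocklist, sender list,
# and function names are illustrative only, not Outlook's or Chrome's internals.

from urllib.parse import urlparse

KNOWN_UNSAFE_SITES = {"malware.example", "phishing.example"}   # placeholder entries
VERIFIED_SENDERS = {"colleague@example.com"}                   # placeholder entries


def render_email(sender: str, body_html: str) -> str:
    """Disable active content for unverified senders; the user may verify the sender later."""
    if sender in VERIFIED_SENDERS:
        return body_html
    # Protected mode: rather than trusting the user to spot a malicious link,
    # render a neutered version of the message. (Real clients sanitise far more
    # thoroughly than this simple substitution.)
    neutered = body_html.replace("href=", "data-disabled-href=").replace("src=", "data-disabled-src=")
    return "[Sender not verified: links and images are disabled]\n" + neutered


def navigate(url: str, user_accepts_risk: bool = False) -> str:
    """Warn about and block navigation to listed sites unless the user explicitly overrides."""
    host = urlparse(url).hostname or ""
    if host in KNOWN_UNSAFE_SITES and not user_accepts_risk:
        return f"Warning: {host} is flagged as potentially harmful. Navigation blocked."
    return f"Loading {url} ..."


if __name__ == "__main__":
    print(render_email("stranger@example.org", '<a href="https://phishing.example">Click me</a>'))
    print(navigate("https://malware.example/download"))        # blocked with a warning
    print(navigate("https://malware.example/download", True))  # explicit user override
```

The important design choice here is that the safe behaviour requires no action from the user, while the risky behaviour requires a deliberate override.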

Both examples show how software developers can help their users navigate the digital world more safely. The CSD model thereby shows great promise for making the digital world a safer place. However, it cannot do so all by itself. Despite the criticism that the awareness model has faced, I am convinced that it can work well together with the CSD model. Being made aware of risks always has added value, especially in a CSD-proof environment. A CSD-proof environment can shield users from potentially dangerous content, but it is up to the users themselves to make the final risk assessment, and awareness campaigns can help them do so. Ultimately, it is the right balance of CSD-proof software and user awareness that will add up to safe navigation of the digital world.

Sources:

Bada et al., 2019, ‘Cyber Security Awareness Campaigns: Why do they fail to change behaviour?’, International Conference on Cyber Security for Sustainable Society, accessed 10 September 2022, https://arxiv.org/abs/1901.02672

ENISA, 2021, ‘ENISA Threat Landscape 2021’, accessed 10 September 2022, https://www.enisa.europa.eu/publications/enisa-threat-landscape-2021


1 thought on “Cybersecurity by Design”

  1. Hi Lars,

    A thought-provoking article. CSD seems to be a great approach that preventively shields users from security risks and spares them from having to make every security-related decision. This does, however, take away some of the users' autonomy and some of the opportunities to apply the training they received. You also make the argument that users should make the final risk assessment themselves.

    I am interested in what you think the balance could be between aiding users through CSD and allowing users control over their own risk assessments. One example I could think of is that companies integrate CSD with an AI that calculates a risk profile of, for example, a download link. The AI would scan the link's domain, perform a content check where possible, and check previous downloads. The user would then be able to make a decision based on that risk profile.

    What do you think?

    – Loc Man Tang
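As a rough illustration of the risk-profile idea suggested in the comment above, the sketch below combines a few hypothetical signals (domain reputation, a content scan, prior download counts) into a single score that is shown to the user, who still takes the final decision. All signal names, weights, and thresholds are invented for illustration:

```python
# A rough sketch of the risk-profile idea from the comment above. Every signal,
# weight, and threshold here is hypothetical; a real system would rely on curated
# reputation data and proper content scanning rather than these stubs.

from dataclasses import dataclass


@dataclass
class RiskProfile:
    domain_reputation: float   # 0.0 (unknown/bad) .. 1.0 (well-established)
    content_scan_clean: bool   # did a malware scan of the file come back clean?
    prior_downloads: int       # how often this exact file has been downloaded before


def risk_score(p: RiskProfile) -> float:
    """Combine the signals into a single 0..1 risk score (higher = riskier)."""
    score = 1.0 - p.domain_reputation
    if not p.content_scan_clean:
        score += 0.5
    if p.prior_downloads < 10:
        score += 0.2
    return min(score, 1.0)


def present_to_user(p: RiskProfile) -> str:
    """The CSD part: show the assessment, but leave the final decision to the user."""
    score = risk_score(p)
    verdict = "high" if score > 0.7 else "moderate" if score > 0.3 else "low"
    return f"This download has a {verdict} risk score of {score:.2f}. Continue anyway? [y/N]"


if __name__ == "__main__":
    print(present_to_user(RiskProfile(domain_reputation=0.2,
                                      content_scan_clean=False,
                                      prior_downloads=3)))
```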
