The Growing Threat to AI in Education


Hackers are targeting artificial intelligence (AI) systems in education – Gary Henderson explores the risks of AI being manipulated, from data poisoning to supply chain attacks

Artificial Intelligence (AI) is the big talking point in education circles: how it might save teachers and other school staff time, how it might support students and level the playing field, and a range of other potential benefits. But AI is an IT system much like any other, and like most IT systems it can be manipulated by criminals or other unscrupulous individuals towards unintended ends. So, what are the risks of AI being “hacked”?

Data Poisoning and Bias Exploitation

An AI model uses training data to generate its output. Think ChatGPT and other generative AI, where data scraped from the internet is used to train the model so it can respond to a prompt. Data poisoning involves manipulating that training data so the AI behaves in a way other than it would under normal circumstances. A simple example is the person who walked down a street pulling a trolley filled with mobile phones: the data from those phones, moving slowly along the street, suggested to Google that there was a traffic jam, and Google Maps then directed drivers around the area. The individual had, in effect, poisoned Google’s AI by manipulating the data it was receiving. In the same way, training data can be deliberately skewed to produce biased outputs that favour certain groups.
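To make the idea concrete, here is a deliberately tiny, hypothetical sketch in Python of label-flipping poisoning against a simple nearest-neighbour classifier. The data, labels and classifier are invented for illustration and do not represent any real system; the point is simply that corrupting the training data changes what the model does.

```python
def nearest_neighbour(training_data):
    """1-nearest-neighbour 'model': returns the label of the closest training point."""
    return lambda x: min(training_data, key=lambda pair: abs(x - pair[0]))[1]

# Clean training data: low readings are "normal", high readings are "suspicious".
clean = [(1.0, "normal"), (1.2, "normal"), (0.8, "normal"),
         (9.0, "suspicious"), (9.5, "suspicious"), (8.7, "suspicious")]

# The attacker injects a handful of records with deliberately wrong labels.
poisoned = clean + [(9.1, "normal"), (9.6, "normal")]

print(nearest_neighbour(clean)(9.08))     # -> "suspicious"
print(nearest_neighbour(poisoned)(9.08))  # -> "normal": the poisoned data has changed the behaviour
```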

Model Inversion / Data Extraction

Consider AI-based personal assistants: systems which help us manage our calendars, emails and so on. To provide the personalised responses we expect, these systems have access to a great deal of data about us. Their training data is the data we give them access to, such as our email, documents and calendar, plus our interactions with the assistant. An attacker might therefore be able to ask questions of the model crafted in such a way as to get it to disclose information about us: our activities, behaviours and preferences. In effect, the model has been inverted, so that rather than learning from our actions and data, its training data is being used to disclose information about us.
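As a toy, hedged illustration of the idea, the Python below builds a naive completion “model” that has memorised a user’s made-up emails, and shows how a well-chosen prompt can coax it into reproducing private details. Real extraction attacks probe large language models, but the principle is similar.

```python
from collections import defaultdict

def build_completion_model(private_documents):
    """Very naive 'model': remembers which word follows each pair of words."""
    model = defaultdict(list)
    for doc in private_documents:
        words = doc.lower().split()
        for i in range(len(words) - 2):
            model[(words[i], words[i + 1])].append(words[i + 2])
    return model

def complete(model, prompt, length=4):
    """Extend the prompt word by word using the memorised continuations."""
    words = prompt.lower().split()
    for _ in range(length):
        options = model.get((words[-2], words[-1]))
        if not options:
            break
        words.append(options[0])
    return " ".join(words)

# The user's private data the assistant was "trained" on (entirely fictional).
emails = ["Reminder that the office door code is 4821 please keep private",
          "Lunch meeting with the finance team on Friday at noon"]

model = build_completion_model(emails)

# The attacker never saw the emails, but probes with a likely prefix.
print(complete(model, "the office door code is"))
# -> "the office door code is 4821 please keep private"
```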

Adversarial and Backdoor Attacks

This type of attack uses carefully crafted inputs to trick an AI into making incorrect classifications or predictions. One widely shared example relates to the use of AI to identify objects in CCTV footage, such as detecting when a person enters a room. By holding a large piece of card printed with a carefully crafted graphic, a person moving around the room can prevent the AI from recognising that anyone is there; they become all but invisible to it. AI can be retrained to deal with a specific trick like this, but it is likely we will continue to find glitches that some will seek to manipulate and exploit. In some cases, criminals may even introduce such flaws during development, as a backdoor, so they can exploit them later.
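The mechanism can be sketched with a toy example. The Python below uses a hypothetical linear “detector” (made-up weights and threshold) and nudges each input feature against the sign of its weight, which is the basic intuition behind gradient-based evasion attacks, until the score drops below the detection threshold.

```python
weights = [9.0, -4.0, 7.0, 2.0]   # toy "person detector": score = sum(w * x)
threshold = 9.0                   # score above this => "person detected"

def detect(features):
    score = sum(w * x for w, x in zip(weights, features))
    return score, score > threshold

original = [0.8, 0.1, 0.5, 0.3]   # a genuine "person present" input

# Nudge each feature slightly, against the sign of its weight, so every
# small change pushes the detector's score downwards.
epsilon = 0.1
adversarial = [x - epsilon * (1 if w > 0 else -1)
               for w, x in zip(weights, original)]

print(detect(original))      # score ~10.9, True  -> detected
print(detect(adversarial))   # score ~8.7,  False -> missed, despite small changes
```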

API Exploits and Unauthorised Access

As AI becomes more common, we will increasingly need to interface it with other solutions and technologies. Generally this is done through Application Programming Interfaces (APIs), which allow different applications to interact and communicate. The issue is that such APIs might themselves be manipulated. This could simply be an attempt to gain access to the AI system and its data, but it could equally be a route to poisoning the training data, extracting data via model inversion, or carrying out a variety of the other attack methods mentioned above.
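By way of illustration, here is a hypothetical Python sketch of the sort of basic checks an AI-facing API might apply before a request ever reaches the model: authentication, rate limiting and payload validation. The keys, limits and field names are invented for the example and do not describe any particular product.

```python
import time
from collections import defaultdict

VALID_API_KEYS = {"school-staff-key-123"}   # hypothetical issued keys
MAX_REQUESTS_PER_MINUTE = 30
request_log = defaultdict(list)             # api_key -> recent request timestamps

def handle_request(api_key, payload):
    # 1. Reject callers that cannot prove who they are.
    if api_key not in VALID_API_KEYS:
        return {"error": "unauthorised"}

    # 2. Throttle callers who probe the model at machine speed.
    now = time.time()
    recent = [t for t in request_log[api_key] if now - t < 60]
    if len(recent) >= MAX_REQUESTS_PER_MINUTE:
        return {"error": "rate limit exceeded"}
    request_log[api_key] = recent + [now]

    # 3. Validate the payload before it is passed to the AI model.
    prompt = payload.get("prompt", "")
    if not isinstance(prompt, str) or len(prompt) > 2000:
        return {"error": "invalid prompt"}

    return {"response": f"model would be called with: {prompt[:50]}"}

print(handle_request("wrong-key", {"prompt": "hi"}))             # -> unauthorised
print(handle_request("school-staff-key-123", {"prompt": "hi"}))  # -> passed to the model
```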

Supply Chain Attacks

Linked to the above, AI solutions, and the companies which run them, rely on multiple third-party services for hosting, analytics, cybersecurity, HR and more. Each of these third parties providing goods or services is a potential entry point for a criminal into the company, its AI and its data. As integration and connectivity grow, so does the risk across the supply chain.

These are just some of the attacks AI may face, but as AI use grows, new threats will emerge. While most risks are unlikely to directly impact schools, awareness is key. Few schools run AI locally, where direct attacks might be a concern, though some do. The focus should be on the AI tools provided by external companies, which may be vulnerable to the threats outlined above, indirectly affecting staff and students. For schools, the primary risks remain unintended AI responses or data leaks, which must be considered in usage and risk decisions.

It’s also important to recognise that while AI is a target for attacks, it will also be used in both defensive and offensive cyber operations. We now live in a world where AI is here to stay: a tool that will be used for good, but also by some for ill. A balance exists, although I suspect the attackers will continue to have the upper hand.
