"> AI could be trained to be naughty » Socitm
Don't miss out!

President's Week 2020 June 8 - 12

Become a Socitm member today

Join a vibrant community of digital leaders who share a passion for transforming local, regional and national public services.

Find Out More

Become a member  

Home » AI could be trained to be naughty

AI could be trained to be naughty


Artificial intelligence (AI) could be covertly trained to misbehave, a new research paper has warned.

Confirming your luddite friends’ worst fears, a group of scientists from New York University discovered that AI systems can be corrupted by tampering with their training data, whether by jokers or worse. Apparently, such attacks are difficult to detect and could even cause real accidents – the paper demonstrates a road-sign recogniser that a hidden trigger tricks into mistaking a stop sign for a speed-limit sign.

Because training an AI system takes huge amounts of data and computing power, many firms outsource the job to bigger companies like Google and Amazon – which, the researchers warn, could create security problems.

The paper explores the concept of a ‘backdoored neural network, or BadNet’, an attack scenario in which ‘the training process is either fully or (in the case of transfer learning) partially outsourced to a malicious party who wants to provide the user with a trained model that contains a backdoor’.

The paper continues: ‘The backdoored model should perform well on most inputs (including inputs that the end user may hold out as a validation set) but cause targeted misclassifications or degrade the accuracy of the model for inputs that satisfy some secret, attacker-chosen property, which we will refer to as the backdoor trigger.’
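To make the mechanism concrete, here is a minimal Python sketch of the data-poisoning step behind such an attack, assuming a generic image-classification training set. The 3x3 corner trigger, the target label and the poisoning rate are illustrative choices, not the paper’s exact setup; the point is that a model trained on the tampered set can still score well on clean validation data while quietly learning the trigger-to-target mapping.

```python
import numpy as np

TARGET_LABEL = 7        # attacker-chosen class that triggered inputs map to
POISON_FRACTION = 0.05  # fraction of training images the attacker tampers with

def stamp_trigger(image: np.ndarray) -> np.ndarray:
    """Overlay a small, fixed pixel pattern (the backdoor trigger)
    in the bottom-right corner of the image."""
    poisoned = image.copy()
    poisoned[-3:, -3:] = 1.0  # a 3x3 bright square acts as the secret trigger
    return poisoned

def poison_dataset(images: np.ndarray, labels: np.ndarray,
                   rng: np.random.Generator) -> tuple[np.ndarray, np.ndarray]:
    """Stamp the trigger onto a small random subset of training images
    and relabel them as TARGET_LABEL. The rest of the set is untouched,
    so overall accuracy on clean inputs barely moves."""
    images, labels = images.copy(), labels.copy()
    n_poison = int(POISON_FRACTION * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = stamp_trigger(images[i])
        labels[i] = TARGET_LABEL
    return images, labels

# Example: poison a toy set of 28x28 grayscale images.
rng = np.random.default_rng(0)
clean_images = rng.random((1000, 28, 28))
clean_labels = rng.integers(0, 10, size=1000)
poisoned_images, poisoned_labels = poison_dataset(clean_images, clean_labels, rng)
```

In the paper’s outsourcing scenario this step happens on the trainer’s side, out of the customer’s sight, which is why validating the returned model on clean held-out data will not expose the backdoor.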

And for the brave, the research team’s dense, esoteric report is ‘BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain’ (arXiv:1708.06733). I’ll buy a drink for anyone who gets through it.

We put out a less terrifying briefing on AI in the public sector earlier in the year.

