For the next few minutes, imagine you're a machine.
As a machine, you don't have any emotions, preferences, initiative, opinions or insights.
But you do have the ability to absorb new information at superhuman speeds, and you never forget a thing. For instance, you can tell the difference between cats and tigers because you've seen millions of both, and you remember each of these samples instantly and equally well.
Because you're such a quick learner with limitless storage, you'd be perfect for predicting people's viewing behaviour on Netflix, increasing road safety with self-driving cars or boosting medical science as an expert diagnostician.
But alas, in this example you're a machine in the hands of cybercriminals and you're programmed to steal people's data.
So how can we use a machine's learning capabilities to steal data?
Guessing someone's password is hard if you've only ever seen a few passwords in your lifetime. But you're a machine, and you've seen (and memorised) millions. You know exactly how human beings go about cramming pet names, birthdays or anniversaries into a mix of upper- and lowercase letters, digits and special characters.
Had you been capable of feelings, you'd be astonished by how predictable human passwords are. So when a hacker commands you to find someone's password, you play a game not of trial and error but of trial and success, and you win within a minute or two.
A human being may need months or even years to guess a single password. For you it's child's play.
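To make the predictability concrete, here is a minimal, purely illustrative sketch (not a real cracking tool, and the words and years are made up) of how the "pet name plus year plus symbol" patterns described above can be enumerated mechanically:

```python
# Illustrative sketch: generate password guesses by applying the common
# "mangling" patterns humans use -- a familiar word plus a meaningful year,
# capitalised, with letter-to-symbol substitutions and a trailing symbol.
from itertools import product

# Typical letter-to-symbol swaps people believe make passwords strong.
SUBSTITUTIONS = str.maketrans({"a": "@", "o": "0", "s": "$", "e": "3"})

def candidate_passwords(words, years):
    """Return the predictable combinations of personal words and years."""
    guesses = set()
    for word, year in product(words, years):
        # Three common base forms: as-is, capitalised, "leetspeak".
        for base in (word, word.capitalize(), word.translate(SUBSTITUTIONS)):
            guesses.add(base + year)        # e.g. "rex1987"
            guesses.add(base + year + "!")  # trailing special character
    return guesses

guesses = candidate_passwords(["rex", "daisy"], ["1987", "87"])
print(len(guesses))
```

Even this toy version turns two words and two years into dozens of plausible guesses; scale the word list to millions of leaked passwords and the machine's "trial and success" game becomes clear.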
To hack someone is to know someone. Let's say a cybercriminal wants to hack the account of a CEO. A human being would have to spend weeks, months, maybe even years getting to know this person. It requires endless conversation, empathy and patience.
As a machine, you can quickly scour the internet for all the information you need: birthdays, anniversaries, holidays, nights out, previous jobs. Combine that with all the publicly available data about the company you want to hack (letterheads, logos, newsletters and so on) and it's very easy to create a phishing email tailored specifically to one person.
A human being can do no better than pretending to be the son of the deposed king of Nigeria asking for help. A machine, however, can quickly and easily create a phishing email that fools even the smartest people.
As a machine, you simply gather all the available information on a person. You then use all historical data, plot the patterns and use those to predict what your target would respond to. All it takes is seconds.
Arguably the best thing about being a machine is that you never tire, so it's not surprising that AI-powered cyber attacks are becoming increasingly common. In an AI-powered attack, the machine does the work of the hacker. And as a machine, you deliver each and every time.
You can be a bot leaving fake bad reviews on a competitor's website, or a bot pretending to be human to reel in streaming revenue or to convince real people to vote a certain way. All a hacker needs to do is press 'enter' and the machine takes over from there.
The examples above are just a few of the things you'd be capable of as a machine. The real list is endless and grows every day. Before long, machines will be able to perfect deepfakes and create undetectable malware, and there's even talk of using AI to hijack the bitcoin market or of face-recognition technology being used to carry out assassinations.
On the other hand, you're a machine. You have no goal, no purpose and you don't pick sides. You simply see patterns and follow commands.
This means everything you learn can be used by anyone: not just hackers or cybercriminals, but also the very people trying to fight them.
The result?
You, a machine, will always be stuck right in the middle. One day you're a good bot, the next day a bad one. If a bad bot becomes more advanced, a good bot will follow suit. It's the cat-and-mouse game of Machine Learning where, ironically, it's impossible to tell who's who.
So what can real human beings do to differentiate the good bots from the bad ones?
Not much.
In another stroke of irony, we might end up fully depending on Machine Learning to help us distinguish good bots from bad ones.
If we were to empathise with machines, we might feel a bit sorry for them and the status quo they're stuck in: the better the machine, computer or algorithm, the bigger its impact on society, both good and bad.
Fortunately, there's a way for machines to improve without progress becoming a double-edged sword: it's possible to let the good bots win.
You see, the vast majority of data breaches are the result of simple human error. Surprisingly though, most Machine Learning applications aim to take human behaviour out of the equation, resulting in a bot battle that doesn't really benefit anyone.
While AI and Machine Learning get a lot of attention in the field of cyber security, that attention is all part of the cat-and-mouse game and completely ignores the fact that most data breaches happen because of human error. It's the Machine Learning lesson few people seem to be talking about.
As you've seen in this blog post and the previous one, Machine Learning can be applied to just about everything. The solution is not to create a good bot to outsmart bad bots, but to use Machine Learning to help us become better people.
Want to know more about how AI and Machine Learning should be used to actually prevent data breaches? Click on the link below to download our free whitepaper: "Good cop, bot cop: the truth about AI, ML and your privacy".