You’re summoned to court, where a mountain of evidence, including phone calls, handwritten notes, emails and chat logs, undeniably proves your complicity in a crime.
The problem is, none of that evidence belongs to you. It has been masterfully crafted by Artificial Intelligence algorithms that have meticulously profiled you.
Developments in
Artificial Intelligence are helping revolutionize many fields such as conversational
commerce. But the same technology can serve as a tool to invade privacy and
commit acts of fraud.
Here are some of
the ways fraudsters may put AI to ill use in the future.
Handwriting forgery
In the old days, imitating handwriting and signatures was a feat that required skill and practice. Not anymore, thanks to an AI algorithm developed by researchers at University College London (UCL).
Titled “My Text
in your Handwriting,” the algorithm only needs a paragraph’s worth of script to
learn a person’s handwriting. It can then write any text in the person’s
handwriting. This is the most accurate replication of human script to date.
The innovation
has positive uses such as helping stroke victims formulate letters without the
concern of illegibility. It can also help in translating comic books while
preserving the author’s original writing style.
However, evil actors can also take advantage of the technology. Given its accuracy, it can become an instrument for forging legal and financial documents, or maybe even changing history. The researchers were able to reproduce the handwriting of Abraham Lincoln, Frida Kahlo, and Arthur Conan Doyle.
The researchers claim that forensic experts could still tell the difference. But that will become harder as the software matures.
Fake conversations
Chatbots are finding their way into more and more domains. And thanks to artificial intelligence, they’re providing increasingly natural experiences. But what happens when they become too real?
Last year, messaging app company Luka created a chatbot that impersonated the cast of HBO’s Silicon Valley. The app’s neural networks ingested the scripts from the show’s first two seasons to learn the characters’ language patterns, then created bots that talked like the actual fictional characters. Two seasons’ worth of dialogue is not enough to create a convincing chatbot, but the idea behind it was very real.
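To give a feel for the basic idea, here is a toy sketch of learning someone's phrasing from a small text sample. This is a simple Markov-chain model, far cruder than the neural networks Luka used; all names and the sample text are illustrative, not taken from any real system:

```python
import random
from collections import defaultdict

def build_model(text, order=2):
    """Map each run of `order` consecutive words to the words seen after it."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        model[key].append(words[i + order])
    return model

def generate(model, length=20, seed=0):
    """Produce text by repeatedly sampling a word that followed the last pair."""
    rng = random.Random(seed)
    key = rng.choice(list(model.keys()))
    out = list(key)
    for _ in range(length):
        choices = model.get(tuple(out[-len(key):]))
        if not choices:
            break  # reached a dead end in the sample text
        out.append(rng.choice(choices))
    return " ".join(out)

# Hypothetical snippet standing in for a person's chat history.
sample = (
    "I think we should ship it. I think the demo went well. "
    "We should ship the beta this week and see how it goes."
)
model = build_model(sample)
print(generate(model))
```

With only a few sentences of input, the output is mostly recombined fragments of the original; the point is that even this trivial technique echoes a person's word choices, which hints at what far larger models trained on years of messages can do.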
A few months
later, the company used the same technique to virtually bring the dead back to
life. By feeding the algorithm with a history of text messages, social media
conversations and other sources of information, Luka’s engineers succeeded in
creating a chatbot based on the company’s deceased co-founder.
Luka wants to
create bots that mimic real-life people. And with newer generations of people
creating even more digital content, that goal is becoming achievable. Such
chatbots can have some very productive uses — as long as they are within your
control.
But fraudsters can put the same technique to malicious use. For instance, spear phishers usually spend weeks or months learning and mimicking the habits of their targets. Will they use AI and machine learning as a shortcut?
Voice forgery
TNW recently ran
a report about Lyrebird, an AI company that synthesizes speech in anyone’s
voice with a one-minute recording. The samples published on the company’s
website are rather rudimentary.
Google’s WaveNet provides similar functionality. It requires a much bigger data set, but it sounds eerily real. The technology behind it is, as you guessed, neural networks.
The point is, the technology is advancing at an accelerating pace. And as Lyrebird’s founders warn, copying someone else’s voice is now possible, and audio recordings might no longer be a trusted source of evidence.
When put
together, voice, handwriting and conversation forgery can do an awful lot of
good — or evil. We might be heading toward an era where guarding your every bit
of data will become critical.