Apparently Facebook isn't so secure after all

In the face of scathing criticism over privacy concerns, the world's biggest social network, Facebook, has ratcheted up its privacy controls, allowing users to get a better handle on exactly what they share and what they don't.

I. Breaching Facebook With Softbots

But a new study by researchers at Canada's University of British Columbia exposes a major new security problem for Facebook -- "socialbots".  A "socialbot" is a software AI agent (a so-called "softbot") that operates a bogus account.  These agents can be employed to infiltrate social networks, posing as humans in order to mine hapless users' data.
Socialbots can be controlled by a central botmaster to harvest user data. [Source: U of BC]

In the U of BC study, the researchers used online tools to utterly defeat Facebook's CAPTCHA registration safeguards, which are supposed to prevent softbots from creating accounts.  In the end, only 20 percent of the fake accounts were detected, and only because alert users noticed them behaving oddly and reported them.  A whopping 80 percent of the accounts were never discovered to be bots.

The researchers cleverly took pictures from the social site "Hot or Not" to make the bots look like attractive males or females.  Photos from highly rated users were selected to build the bots' online personas.

Hot or Not
The softbots take their profile pictures from highly rated users of the site "Hot or Not".

II. Bots Make Friends and Steal Things

Interestingly, despite their stunning good looks, only 20 percent of the bots' initial friend requests were accepted.  Good-looking female bots without many friends were more likely to be accepted than the male bots.

But as the bots accumulated friends, both the male and female bots saw their acceptance rates soar to 30-45 percent.  Male bots actually saw higher acceptance rates than female bots once they had established a large friend base.  The higher acceptance rate turned out to be correlated with friend requests sent to users whose friends had already accepted the bot.  A "second pass" friend request (where the user and the bot share friends) had up to a 60 percent acceptance rate.
Social bot acceptance
Socialbots with more friends saw a higher acceptance rate for friend requests, as they shared friends with the users they targeted. [Source: U of BC]
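The targeting pattern the researchers describe -- court strangers first, then prioritize users who already share a friend with the bot -- can be sketched roughly as follows (the function, names, and toy graph here are hypothetical illustrations, not the study's actual code):

```python
import random

def pick_targets(graph, accepted, n):
    """Pick the bot's next friend-request targets.

    Per the study's finding, "second pass" requests -- sent to users who
    already share a friend with the bot -- are accepted far more often,
    so those users are preferred; otherwise fall back to strangers.
    `graph` maps each user to the set of that user's friends.
    """
    # Users who are friends with someone who already accepted the bot.
    mutual = {u for a in accepted for u in graph[a]} - accepted
    pool = list(mutual) if mutual else [u for u in graph if u not in accepted]
    return random.sample(pool, min(n, len(pool)))

# Toy network: alice has already accepted the bot, so her friends
# bob and carol become the preferred targets.
friends = {
    "alice": {"bob", "carol"},
    "bob":   {"alice", "dave"},
    "carol": {"alice"},
    "dave":  {"bob"},
}
print(pick_targets(friends, {"alice"}, 2))
```

This is just the selection logic; the actual socialbots also paced their requests to stay under Facebook's rate limits.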

Over the eight-week (two-month) study, the 102 softbots averaged 20 friends apiece.  Some social bot-terflies managed 80 to 90 friends in that brief time.

The bots avoided detection by posting quotes pulled from a public API.  The quotes allowed the bots to pass as literate, legitimate air-breathers.

All the while, the bots were quietly siphoning information from the users, including emails, phone numbers, and other private details.  The bots managed to grab 175 pieces of such data, on average, per day.  By the end of the study the researchers had amassed 250 GB of private data (properly encrypted for users' protection, of course), which they deleted after summarizing their results.
Private data
The socialbots absconded with a wealth of data. [Source: U of BC]

By the end of the study the bots had gained access to 3,055 friends (out of 8,570 total friend requests) and an extended network of 1,085,785 friends-of-friends, many of whose profiles were partially visible even if those users had set their profiles not to be searchable online.
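As a quick sanity check, those end-of-study totals (all taken from the figures above) work out to roughly a 36 percent overall acceptance rate and about 30 friends per bot, consistent with the 30-45 percent range reported once the bots had built up friend bases:

```python
# End-of-study figures, as reported in the article.
requests_sent    = 8_570
friends_accepted = 3_055
bots             = 102

acceptance_rate = friends_accepted / requests_sent   # ~0.356
friends_per_bot = friends_accepted / bots            # ~30

print(f"overall acceptance rate: {acceptance_rate:.1%}")
print(f"friends per bot:         {friends_per_bot:.1f}")
```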

III. Facebook and Authors Don't See Eye to Eye

Facebook clearly wasn't a fan of the study and questioned its methodology.  A company spokesperson commented to All Facebook:

We use a combination of three systems here to combat attacks like this — friend request and fake account classifiers, rate-limiting techniques, and anti-scraping technology. These classifiers block and disable inauthentic friend requests and fake accounts, while rate-limiting truncates the damage that can be done by any one entity. We are constantly updating these systems to improve their effectiveness and address new kinds of attacks. We use credible research as part of that process. We have serious concerns about the methodology of the research by the University of British Columbia and we will be putting these concerns to them.  In addition, as always, we encourage people to only connect with people they actually know and report any suspicious behavior they observe on the site.

However, the authors defend their work.  Yazan Boshmaf, Ildar Muslukhov, Konstantin Beznosov, and Matei Ripeanu write in their paper "The Socialbot Network: When Bots Socialize for Fame and Money":

We have evaluated how vulnerable online social networks are to a large-scale infiltration by a socialbot network. We used Facebook as a representative online social network, and found that using bots that mimic real users is effective in infiltrating Facebook on a large scale, especially when the users and the bots share mutual connections.

Moreover, such socialbots make it difficult for online social network security defenses, such as the Facebook Immune System, to detect or stop a socialbot network as it operates. Unfortunately, this has resulted in alarming privacy breaches and serious implications on other socially-informed software systems. We believe that large-scale infiltration in online social networks is only one of many future cyber threats, and defending against such threats is the first step towards maintaining a safer social web for millions of active web users.

It's not surprising that Facebook rejected the results.  They clearly indicate that softbots are a serious and difficult-to-stop threat to user privacy.  That's not only alarming to users, it's a threat to Facebook's bottom line, which thrives on steady use.



Copyright 2018 DailyTech LLC.