Majority of social media users are happy for their data to be used for research, study reveals

Social media users are generally positive about their personal data being used for research purposes, a study by the University of York has revealed.

Social media platforms have often been used by researchers to gather data on so-called “adverse events” from drugs and medical procedures, with adverse events often being under-reported in studies.

Social media users cited the potential benefit for medical research as the most influential factor for them to consent to their data being used for research.


However, the study revealed concerns around regulation and the ethics of using personal data for research purposes.

The qualitative study used interviews, virtual discussions and focus groups to explore views and attitudes towards the use of social media to monitor adverse events.

Some of those taking part had suffered adverse reactions to medicines themselves.


Lead author Dr Su Golder, NIHR Postdoctoral Research Fellow in the University of York’s Department of Health Sciences, said: “We found it interesting that social media users were happy for their data to be used for research, but as researchers it’s important to take into account their concerns and make sure we assure people that their data will be used appropriately and safely.

“It is clear that social media users are in favour of some sort of overarching guidance for all institutions to follow and that further work is required to establish when consent is required for individuals’ social media data to be used.”

Dr Golder said researchers were already aware of the huge potential benefits of using social media for research purposes.

Helping researchers

“It could be argued that some health scandals of the past could have been averted or discovered earlier if social media was around then as the adverse effects would probably have been highlighted,” she added.

“Social media has a part to play in helping researchers and our study has revealed that people are willing for it to be used under the right circumstances.

“Our findings will not only help direct future research but will also provide people managing social media websites, universities, ethics boards, pharma companies and policymakers with evidence to inform policy and guidance on the use of social media data for research.”

Facebook paid contractors to transcribe audio chats without informing users

Facebook has paid several external contractors to transcribe audio clips from its users, according to people with knowledge of the work. The contractors were not told where the audio recordings came from or how they had been obtained; they were simply asked to transcribe them. The people, who requested anonymity, said they heard conversations between Facebook users but had no knowledge of why Facebook needed the transcriptions.

The Irish Data Protection Commission is examining the activity for possible violations of EU data privacy laws, and Facebook shares fell 1.3% in pre-market trading on the news. Facebook confirmed that it had been transcribing audio from its users but said it would discontinue the practice. It said the affected users had chosen the option in its Messenger app to have their voice chats transcribed; contractors checked whether Facebook’s AI had interpreted the messages correctly.

Other tech companies, including Amazon and Apple, have also come under scrutiny for collecting and reviewing audio from their users, a practice that raises similar privacy concerns. Bloomberg reported that Amazon maintained a team of workers around the globe who listened to Alexa audio requests, supposedly to improve the software. Apple and Google maintained similar practices, although both said the practice had been stopped, and Amazon announced that it would allow users to opt out.

Facebook has long denied that it accesses users’ microphones to inform ads or shape news feeds; Mark Zuckerberg denied it directly in his Congressional testimony. The company has acknowledged, however, that it accesses the microphones of users who grant permission for specific features, such as recording voice messages, without explaining what happens to that information afterwards. Notably, the social media giant only recently completed a 5 billion dollar settlement with the US Federal Trade Commission over a probe of its privacy practices.

Some contractors involved in the transcription work feel it is unethical, since users are not informed that their audio might be reviewed by third parties. One of the firms involved is TaskUs Inc, based in California. Facebook is one of its most important clients, but employees are not allowed to disclose this publicly and refer to the client by the code name Prism. TaskUs also reviews content that may violate Facebook’s policies, as well as political ads and election activity. Facebook instructed it to pause the transcription work a week earlier.

Facebook’s data use policy, last revised a year ago, makes no mention of audio. It says only that the company collects the content, communications and other information users provide when they communicate with one another. It does not say that content may be reviewed by other human beings, stating instead that its systems process content automatically for context. Transcription teams are not mentioned; there is only a vague reference to vendors that support the business.

Machines can transcribe audio but still struggle in some unfamiliar cases, which is why human review is used. It is also a point of concern that moderators have often encountered disturbing content on the world’s largest social network.

This Simple Online Game Could Work Like a 'Vaccine' Against Fake News

Looking for a way to stop the spread of fake news, researchers developed an online role-playing game. In February 2018, researchers from the University of Cambridge helped launch the browser game Bad News. Since then, thousands of people have spent 15 minutes completing it, and many have allowed their data to be used for study. Within the simulation, players stoke anger and fear by manipulating news and social media, and the game has shown positive results.

Dr. Sander van der Linden, director of the Cambridge Social Decision-Making Lab, said that fake news spreads faster and more easily than the truth, so fighting it can feel like a losing battle. He added that the team wanted to see whether exposing people to a weakened dose of the techniques used to generate false information could help them distinguish a hoax from real news. In psychology this is known as inoculation theory: the exposure acts like a psychological vaccine.

To measure the game’s effects, players were asked to rate the reliability of a series of headlines and tweets, a random mixture of real and fake news, before and after playing. A study published in Palgrave Communications showed that the perceived reliability of fake news dropped by an average of 21% after playing the game, and that the people most susceptible to fake news benefited the most.

Van der Linden said that playing the game for just 15 minutes has a modest average effect on any one person, but that in practice this scales to thousands of people. Study co-author Jon Roozenbeek of Cambridge University said the team is shifting its target from ideas to tactics, hoping to create a general vaccine against fake news rather than countering each individual falsehood with an opposing view.

The game has attracted considerable attention, and working with the UK Foreign Office, the team has translated it into several languages, including German, Serbian, Polish and Greek. WhatsApp has also commissioned the team to create a version of the game for its platform.

The researchers have also created a junior version of the game, which has limitations but proved equally beneficial across age groups. Players earn six badges in the game, each reflecting a common strategy used by purveyors of fake news, but due to limited bandwidth the questions measuring its effects covered only four of the badges. Roozenbeek added that the platform offers early proof that protection against deception can be built by training people to recognize the techniques that drive fake news.