WhatsApp Attempts to Limit Fake News by Restricting Forwards, After Mob Killings
The platform would, however, be helpless if someone decided to copy and paste the same content into new chats and group conversations.

As Facebook-owned WhatsApp continues to battle the spread of fake news on its instant messaging platform, the company has announced that it is adding limits on the number of times any user can forward a particular message on the app. At present, WhatsApp is testing a limit of five chats; once that limit is crossed, the forwarding option is disabled for that particular message. The forwarding limit applies to text, image and video forwards.
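To illustrate the mechanics described above, here is a minimal sketch, in Python, of how a client could track how many chats a message has been forwarded to and withdraw the forward option once the five-chat limit is crossed. This is purely illustrative: WhatsApp's actual implementation is not public, and the class and method names here are hypothetical.

```python
# Hypothetical sketch of a per-message forward limit; WhatsApp's real
# implementation is not public, so names and behaviour here are assumptions.

FORWARD_LIMIT = 5  # the lower limit being tested for users in India


class ForwardTracker:
    """Tracks how many chats each message has been forwarded to."""

    def __init__(self, limit: int = FORWARD_LIMIT):
        self.limit = limit
        self._forward_counts: dict[str, int] = {}

    def can_forward(self, message_id: str) -> bool:
        """True while the message is still under the forward limit."""
        return self._forward_counts.get(message_id, 0) < self.limit

    def record_forward(self, message_id: str, target_chats: list[str]) -> list[str]:
        """Forward to as many of the requested chats as the limit still allows."""
        used = self._forward_counts.get(message_id, 0)
        allowed = target_chats[: max(0, self.limit - used)]
        self._forward_counts[message_id] = used + len(allowed)
        return allowed


tracker = ForwardTracker()
# The first forward to three chats goes through in full...
print(tracker.record_forward("msg-1", ["chat-a", "chat-b", "chat-c"]))
# ...a later forward to three more chats is truncated to the remaining two,
# after which the forward option would be disabled for this message.
print(tracker.record_forward("msg-1", ["chat-d", "chat-e", "chat-f"]))
print(tracker.can_forward("msg-1"))  # False once the limit has been reached
```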

This development comes a day after the Government of India sent a second notice to WhatsApp, asking it to come up with concrete measures to tackle the spread of fake news via the app. The Government wants the company to ensure “accountability and facilitate enforcement of law” in the wake of rising incidents of rumours spread via WhatsApp triggering lynchings in the country, and is looking for solutions that can help law enforcement agencies. This move came after the Supreme Court had also recently taken note of the increasing cases of lynching across the country and had asked Parliament to come up with special laws to deter such crimes.

“Today, we're launching a test to limit forwarding that will apply to everyone using WhatsApp. In India - where people forward more messages, photos, and videos than any other country in the world - we'll also test a lower limit of 5 chats at once and we'll remove the quick forward button next to media messages,” says WhatsApp, in an official statement.

WhatsApp has already started labelling all forwarded messages as ‘Forwarded’, to separate them from originally created messages, in an attempt to make recipients more aware that a message has been forwarded and to let them know that its contents were not originally written by the sender in question.

Incidentally, there is very little WhatsApp can do if a user finds a way to circumvent these newly imposed limits on forwarded messages. For instance, WhatsApp can do nothing if a user copies the text of a potentially inflammatory or violence-inciting message and manually pastes it into new chats. The same can be done for media, including photos and videos: they can be downloaded to a mobile phone or PC, and sent to new chat recipients one by one.

This is where the limitations of an instant messaging platform show up: there is precious little it can do, since any further restrictions on sending or receiving messages would also hamper genuine users of the app.

It is important to note that it will be difficult to detect and flag fake, violence-inciting and similarly dangerous content being shared on the platform. For content such as porn, child abuse, threats and racism, detection methods typically rely on keywords, phrases and image-detection algorithms; those filters are not necessarily applicable to fake news forwards at the moment. Earlier this month, WhatsApp had made its stand clear on the issue. The company, in an official statement, had said that governments, technology companies and users need to work together to prevent the spread of fake news.
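To make that gap concrete, here is a minimal sketch, assuming a simple keyword and phrase blocklist of the kind used for clearly prohibited content; the phrases, function name and messages are hypothetical and do not reflect any real WhatsApp or Facebook system. A fabricated rumour rarely contains a fixed phrase a filter can match, which is why this approach does not carry over to fake news.

```python
import re

# Hypothetical blocklist of the kind usable for clearly prohibited content;
# the entries are placeholders, not real filter terms.
BLOCKED_PHRASES = ["child abuse", "explicit threat"]


def flag_message(text: str) -> bool:
    """Flag a message if it contains any blocklisted phrase (case-insensitive)."""
    lowered = text.lower()
    return any(re.search(re.escape(phrase), lowered) for phrase in BLOCKED_PHRASES)


# A message quoting a blocklisted phrase is caught by the keyword filter...
print(flag_message("report any explicit threat immediately"))   # True
# ...but a fabricated rumour contains no fixed phrase to match, so it passes
# through even though it may be dangerous misinformation.
print(flag_message("Strangers in the next village are kidnapping children"))  # False
```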
