Content Moderation On The Main Social Media Sites
As the sites have evolved and their applications for business have become clearer, new tools and facilities have been added to control and moderate content that is not posted by the business itself.
The importance of monitoring what is said is easily demonstrated - barely a week goes by without news of an internet publicity problem for a business.
These issues seem mostly to arise when some ill-advised advertising ties in with a newsworthy event.
In the days when complaints came by letter or telephone, very few would notice - but now that customer backlash can appear online, bad publicity can go viral in minutes.
It is clearly now more important than ever to 'engage brain before opening mouth' when making statements on behalf of a brand.
However, the open nature of social media means that brands leave themselves open to adverse comments, spam attacks and other problems if there is no control over what is posted, whether or not the brand has caused the problem itself.
So what facilities are available on the main sites to keep content under control, without stifling debate and feedback? Facebook provides some basic content moderation facilities for fan pages.
These include a two-level profanity filter, which should catch the most undesirable spam posts that can hit any page.
The site also offers the ability to restrict the countries in which the page can be seen - either allowing a list of countries, or blocking a list.
Finally there is a simple moderation setting - this is either 'show all posts by default' or 'hide all posts by default', the latter pending approval by the page admins.
While a popular fan page may create a lot of work for the moderator, leaving a page with all posts showing can be fraught with risk.
At the very least, the profanity filter should be enabled.
The main way to provide content moderation on YouTube is to create a channel for the brand.
This allows the brand to have its own look and feel, to control what videos are posted and to control the comments that are made.
For the last of these, comments can be pre-moderated - meaning they will not appear until approved - or reactively moderated, in which case they will appear by default.
Reactive moderation should be treated with caution, as inappropriate comments will only be noticed if they are flagged.
Photo-sharing websites such as Flickr, Instagram and Pinterest tend to rely on users following their terms of use, which make the usual references to content that is 'legal, decent, honest and truthful' (as the British Advertising Standards Authority puts it).
As the definitions of these words are often very personal, the onus is very much on brands to monitor what is posted on their pages and to take appropriate action.
The sites have mechanisms to allow users to flag inappropriate content, and also for flagging particular accounts if there are repeated reports.
Brands need to take care to ensure that their accounts are not flagged as inappropriate.
It is worth noting that Pinterest is unusual in that its terms of use allow all content to be modified, as well as re-posted.
This means that brand logos, statements and similar may be changed and incorporated into other pins.
Under the site's terms of use, there is nothing to be done about this.
It could be said that it is impossible to have control of content on Pinterest; however, it is possible to incorporate a 'no-pin' code into a website.
This will prevent the content from being pinned, and if there is a serious misuse of content then Pinterest has a complaints procedure.
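To illustrate, Pinterest's no-pin mechanism takes the form of a meta tag placed in a page's head; the wording of the optional message shown to users is an example only.

```html
<head>
  <!-- Tells Pinterest's pinning tools not to pin any content from this page -->
  <meta name="pinterest" content="nopin"
        description="Sorry, images from this site cannot be pinned." />
</head>
```

With this tag in place, a visitor who tries to pin an image from the page sees the supplied message instead of a new pin being created.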
There have been no test cases as yet, so no legal precedent has been established.
The final major site to consider is Twitter.
Due to its ease of use and limited-length format, Twitter can be where news spreads fastest.
There are also ways in which Twitter itself can produce unwanted results.
For example, projecting a Twitter feed live at an event is now a common occurrence, allowing attendees to send their thoughts to everyone at the event, providing a talking point and entertainment.
However, it is all too easy for this facility to become memorable for all the wrong reasons - especially if a large display board is used, or if the hashtag was released before the event and the campaign has been hijacked.
It has become apparent that at least a basic level of content moderation is essential for these feeds - un-moderated feeds are open to abuse, profanity, 'off-message' postings or worse.
Because Twitter has an 'open' programming interface, developers have provided add-ons, utilities and tools in abundance.
There are several free tools now available for moderating Twitter feeds, as well as paid-for tools with wider capability.
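At their core, such tools apply a moderation pass to each incoming post before it is displayed. A minimal sketch of keyword-based pre-moderation is shown below; the blocklist and sample tweets are illustrative only, and a real tool would fetch posts via the Twitter API and use a far more sophisticated filter.

```python
# Minimal sketch of keyword-based feed moderation.
# The blocklist and the moderate_feed helper are illustrative assumptions,
# not part of any real tool's API.

BLOCKLIST = {"badword", "spamlink"}


def is_acceptable(tweet: str) -> bool:
    """Return True if the tweet contains no blocklisted words."""
    words = {w.strip(".,!?#@").lower() for w in tweet.split()}
    return BLOCKLIST.isdisjoint(words)


def moderate_feed(tweets):
    """Split a feed into tweets safe to display and tweets held for review."""
    approved = [t for t in tweets if is_acceptable(t)]
    held = [t for t in tweets if not is_acceptable(t)]
    return approved, held
```

For example, `moderate_feed(["Great talk! #event", "Visit spamlink now"])` would approve the first tweet for the live display and hold the second for a human moderator - the pre-moderation model described above for Facebook and YouTube.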