car_dealership

Protect dealers and buyers on secondhand car platforms

Detecting fraud requires a combination of human expertise and technology.

The secondhand car market has seen considerable growth since 2020, with used-car retailers using digitalization to make their offerings more attractive on their platforms – from photos to video demonstrations – according to Motor Intelligence.

With car dealers and private sellers uploading content, it’s vital these platforms provide a safe and seamless journey for customers. This requires a balance of technology and human intervention to manage the journey at each step.

This paper looks at some of the pain points on secondhand car platforms, key industry insights, and how Webhelp can offer a comprehensive and game-changing solution with experts in digital services.

Download insights

 

Author

Thomas Japy

Digital Content Services Business Analyst

Contact the author

Discover the 7th edition of our OneShot magazine on Technology

Our 7th edition of the OneShot is here!

Download your OneShot Magazine

“Let’s talk about the well-being of your customers and employees, because well-being has become a central challenge for brands.
At Webhelp, we believe digital technology must be oriented around this axis. Technology can really make life easier, to the benefit of everyone.
As you will discover in these pages, there is now plenty of evidence of its effectiveness – and not only in the context of the ‘maintaining of bonds’ that we are going through.
There are also new avenues that deserve to be actively explored. And this is what we are doing, with and for you, as part of numerous experiments.
What is the goal of our Technology department? To make technology an ally, entirely in the service of the well-being of your customers and employees.
An exciting project!”

This 7th edition explores technological innovations that humanize customer relations, facilitate the work of our advisors, and always benefit the end customer.

You will also find testimonials and advice from experts at Massimo Dutti, Vattenfall, Samsung…

What are the latest technological trends that are worth a look?

What are the conditions for successful technology integration?

And let’s not forget Webhelp’s vision and ambition: transparency, security, data and, of course, the human touch.

Summary

  • A word – SXO
  • A number – Zero
  • Three opinions – Technologies that humanize the customer experience (Yan Noblot, Massimo Dutti, Vattenfall)
  • Some info – How Toyota operates predictive customization
  • A demo – Home: a place to live, a place to sell
  • A B-case – How Webhelp proposed and deployed an intelligent tool… to facilitate the work of Samsung Electronics advisors
  • A hashtag – #VideoChat
  • An offer – Telecats, the voice of the customer as a path to action
  • A meeting – the WorldSummit AI
  • A conversation – A weapon of seduction to re-enchant commerce in the city
  • A story – Lego: in what world are you playing?
  • A perspective – For efficient and benevolent technologies

Protect dealers and buyers on classified ad platforms

Consumer content is instrumental in influencing both purchase decision making and the popularity of online businesses.

The trust and safety of users online is crucial in today’s digital world. Classified ads platforms like Gumtree and Craigslist are increasingly popular places for users to publish ads to share or gain information, or to sell unwanted, used, and new items to generate income. The trust and safety of users on these platforms is therefore paramount.

Ensuring users a safe and seamless journey requires a balance of technology and human intervention to manage content at each step.

This paper looks at some of the pain points in the classified ads space, highlighting the typical industry reactions and insights into how Webhelp can offer a comprehensive and game-changing solution with expert content moderators.

Download insights

 

Protect your community of dealers and buyers on marketplace and e-commerce platforms

Managing content to optimize customer experience on e-commerce and marketplace platforms

Protecting users online is crucial for businesses. It’s imperative to have a safe and secure platform for a seamless experience, and to provide customers with trustworthy content to engage with throughout the customer journey.

Did you know: 67% of consumers’ fears about the sharing economy are related to trust, and 73% of people are unlikely to return to a site if its ads have poor descriptions?

This paper looks at some of the pain points in online marketplaces, highlighting how Webhelp can offer a comprehensive and game-changing solution to ensure a smooth and efficient experience.

Download our insights to learn more and discover our solutions.


Learn more about our Digital Content Services

 


AI content

Impact of AI on online content moderation

We have all heard about Artificial Intelligence (AI) and the numerous potential impacts it will have – or already has – on our daily lives.

Machine Learning through Data Annotation teaches computers to recognize what we show them and what we say, and how to react accordingly.
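
As a toy illustration of this idea, the sketch below shows how human-annotated labels let a machine learn word statistics and reproduce a moderator's judgement. The four-example dataset and the word-counting "model" are invented for illustration; real systems rely on vastly larger annotated datasets and far richer models.

```python
from collections import Counter, defaultdict

# Hypothetical annotated dataset: each post labelled by a human moderator.
ANNOTATED = [
    ("you are an idiot", "toxic"),
    ("i hate you so much", "toxic"),
    ("what a lovely day", "safe"),
    ("thanks for the helpful answer", "safe"),
]

def train(examples):
    """Count how often each word appears under each human-assigned label."""
    counts = defaultdict(Counter)
    for text, label in examples:
        counts[label].update(text.split())
    return counts

def classify(counts, text):
    """Score each label by summed word frequencies; the highest score wins."""
    scores = {
        label: sum(counter[w] for w in text.split())
        for label, counter in counts.items()
    }
    return max(scores, key=scores.get)

model = train(ANNOTATED)
print(classify(model, "you idiot"))  # toxic
```

The point is only that the annotation step is where human judgement enters: the machine never decides what "toxic" means, it merely generalizes from what annotators have labelled.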

When trained well, the impact it could have on online Content Moderation seems quite straightforward at first. Nonetheless, we will see that AI brings opportunities to the field as well as new challenges – not forgetting that we are only witnessing its genesis, and there is still great room for improvement.

Already being implemented, but not fully developed yet

In theory, AI seems like a no-brainer, as it can take the hit on the most sensitive content. It can act as a fully impartial arbiter, sparing moderators from having to approve or deny harmful posts themselves.

This is currently put into practice within Webhelp: our in-house technology handles a growing share of incoming User-Generated Content and assigns priority levels so that moderators can take care of the most urgent items first.
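
Webhelp's in-house tooling is not public, but the triage principle just described can be sketched with a simple priority queue. Everything below – the class, the risk scores, and the example posts – is illustrative only, not the actual system.

```python
import heapq

class ModerationQueue:
    """Hypothetical triage queue: an AI scores each incoming post for risk
    (0.0–1.0) and moderators pop the highest-risk items first."""

    def __init__(self):
        self._heap = []      # min-heap of (-risk, seq, post)
        self._seq = 0        # tie-breaker keeps insertion order stable

    def submit(self, post, risk_score):
        # Negate the score so the highest risk sits at the top of the min-heap.
        heapq.heappush(self._heap, (-risk_score, self._seq, post))
        self._seq += 1

    def next_for_review(self):
        """Return the most urgent post still awaiting human review."""
        return heapq.heappop(self._heap)[2]

q = ModerationQueue()
q.submit("holiday photo", 0.05)
q.submit("violent threat", 0.97)
q.submit("spam link", 0.60)
print(q.next_for_review())  # violent threat
```

The design choice worth noting is that the AI only orders the work; a human still makes every final call, which is the balance the rest of this article argues for.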

However, we have established that if AI obtains total control over what can appear on the internet, things get messy very quickly. 2020 pushed tech giants to send workers home and to rely on algorithms to moderate their platforms. As soon as this happened, issues were observed at both extremes: on Twitter there was a steep 40% increase in hate speech in France, while Facebook and Google both doubled the number of pieces of content flagged as potentially harmful from Q1 to Q2.

Several examples of artificially intelligent moderators failing their task have been observed. They cannot understand human expression in the first instance – irony, sarcasm, or words that look strikingly and unambiguously harmful but, once put into context, turn out to be harmless.

This happened with a live chess game on YouTube that was taken down for hate speech when only chess strategy was being discussed. These limitations are starting to fade as researchers from the University of Sheffield successfully integrate context into Natural Language Processing algorithms. This technology will be able to detect differences in language across communities, races, ethnicities, genders, and sexualities – but as Ofcom says: “Developing and implementing an effective content moderation system takes time, effort and finance, each of which may be a constraint on a rapidly growing platform in a competitive marketplace”.
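
The chess incident is easy to reproduce with a toy filter. In the sketch below (all term lists are invented for illustration), a context-blind keyword check flags innocent chess commentary, while even a crude notion of domain context removes the false positive; production NLP models handle context far more subtly than this.

```python
# Naive keyword moderation: flags any post containing a "harmful" term,
# with no notion of context. Chess commentary trips it immediately.
HARMFUL_TERMS = {"attack", "kill", "sacrifice", "threat"}

def naive_flag(text):
    words = text.lower().split()
    return any(term in words for term in HARMFUL_TERMS)

# A context-aware pass (illustrative only): suppress the flag when the
# surrounding vocabulary clearly belongs to a benign domain such as chess.
CHESS_CONTEXT = {"bishop", "knight", "pawn", "rook", "endgame", "checkmate"}

def context_aware_flag(text):
    words = set(text.lower().split())
    if words & CHESS_CONTEXT:
        return False  # domain vocabulary makes the "harmful" terms harmless
    return naive_flag(text)

comment = "white will sacrifice the knight then attack the king"
print(naive_flag(comment))          # True  – a false positive
print(context_aware_flag(comment))  # False – context rescues it
```

A fixed allow-list like `CHESS_CONTEXT` would of course be trivially gamed; the real research challenge is learning such context signals rather than hand-coding them.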

Beneficial in fighting discrimination and derogatory speech online

Pursuing the goal of moderating online content solely through Artificial Intelligence, several start-ups are arriving on the market with ever-improving AI-driven solutions. Bodyguard is a great example of this new generation of players deploying technology to fight hate speech and other online abuse. The platforms themselves have started developing their own tools: Pinterest unveiled the AI that powers its Content Moderation and highlighted its benefits – non-compliant reports have declined by 52% over a year, and self-harm content by 80% over the past two years. As already mentioned, the quality and quantity of labelled data are key: Facebook, drawing on 1 billion Instagram photos, has developed SEER (SElf-supERvised), an innovative image-recognition AI system aimed at moderating the platform almost instantly. As it has only just been launched, we cannot yet assess its direct effects on the platform.

Watching out for the deepfakes

While these new technologies have the potential for a positive impact on Content Moderation, they have also created new challenges that plenty of us have already come across – increasingly without even noticing: deepfakes. When analyzing the credibility of content sources, AI can fairly easily recognize a bot used by malicious actors to amplify disinformation, and we can reasonably assume it will do the same for AI-created deepfakes. Deepfakes are far harder for the human eye to detect, but appropriately trained moderators supported by the right AI-driven tools make the perfect combination, complementing purely automated or purely human moderation quickly and effectively.

The first big reveal for this technology is Microsoft’s deepfake detection tool, which has been trained on over 1,000 deepfake video sequences from a public dataset – much as Facebook trained its moderation AI. Disruptors are also entering the market: platforms like Sensitivity.ai specialize in detecting face-swaps and other deepfakes, which can have deep impacts on the political scene, for instance. The most famous recent example is the deepfake that swapped Tom Cruise’s face onto Chris Ume’s body, which impressed a sizeable part of the internet and went viral. Applied to political speeches or debates from officials, the impact could be far more serious.

AI is not the silver bullet – there’s still room for improvement

Artificial Intelligence is a solution for greater accuracy and efficiency in Content Moderation. Nonetheless, it must not be forgotten that there is still huge room for improvement, as well as growing challenges because of its development for malicious purposes. It is important for any social platform and online community to appreciate how central Artificial Intelligence is becoming in the Moderation field, as both a threat and an opportunity.

Reacting accordingly, with the right combination of human moderators and technological solutions, is in fact essential, as the impact on real life and brand image could rapidly become overwhelming.

 


Job platform match: attracting companies and job seekers using content management & moderation solutions

If the job-matching process is smooth, companies will trust your platform and so will job-seekers.

The trust and safety of users online is crucial in today’s digital world. As 2020 shifted society online and people sought jobs all across the internet, user-generated content is fast becoming a powerful and flexible tool to enhance job-matching capabilities and attract users to job ads platforms.

This paper looks at some of the pain points in the job ads space, highlighting how Webhelp can offer a comprehensive and game-changing solution to ensure a smooth and efficient experience.

Download insights

Stricter content moderation policies put pressure on social media platforms

Concerns about content moderation policies have grown over the last few years, driven by scandals, users, and politics. The debate today is about how world-renowned social media platforms enforce their Content Moderation policies, as opposed to how governments or institutions would like them to (see article). Bear in mind that in most countries these platforms are not immediately liable for their User-Generated Content (UGC); Section 230 of the Communications Decency Act of 1996 in the United States is a prime example of this ‘liability shield’.

When it comes to terrorism or other threats to users, several countries like members of the EU, Brazil and Australia impose a time limit for platforms to delete content after it has been flagged as inappropriate.

With platforms not immediately liable for their User-Generated Content, why are huge corporations enforcing stricter policies, raising censorship concerns? Why don’t they just leave their communities to live freely? They need to moderate for two main reasons:

  1. To protect their communities from violence, hate speech, racism, harassment, and many more threats, ensuring a smooth user experience
  2. To protect individuals on their platforms from being influenced by other users who might spread misleading information or leverage the network to influence their decisions and behaviors

But as you will discover in this article, some platforms endure growing scrutiny from their audience due to their huge reach, whilst others exploit different levels of acceptance to cultivate a particular brand image.

Scrutiny makes social media leaders tighten their CoMo policies

The former is especially the case for Facebook and Twitter. Their billions of daily users have the ability to influence mass opinion far more easily than any other type of media. Following several scandals, trust between these platforms and their users has been damaged. In fact, when questioned in a US Senate hearing last October, the leaders of Twitter, Google, and Facebook were described as “powerful arbiters of truth” – a rather explicit label.

Content Moderation has wide political implications. Last year’s American elections were a major trial for large tech platforms, showing whether they could monitor peaks of political comments, ads, and other UGC safely and considerately. Numerous examples can be cited: Facebook banning political ads, and Twitter first flagging Donald Trump’s tweets as misleading or partly false before both platforms permanently banned the former US president.

TikTok has also been questioned several times about its moderation of political content, but most importantly about near-live suicides, paedophilia, and the increased presence of guns in videos posted by its users. Beyond politics, the reasons such content should be deleted and kept from communities are straightforward. When it comes to firearms, differing local laws make it even less clear how to moderate the use and depiction of these weapons online.

Logically, the pattern rubs off on smaller players

Most Big Tech giants have now funded Safety Advisory Councils – generally “made up of external experts on child safety, mental health and extremism” – signaling to their communities that they are trying their best to protect them while avoiding censorship and audience attrition.

Due to the attention their bigger peers face, the proposed tighter Content Moderation policies are now progressing towards smaller players too. Platforms such as Parler advocate free speech and use it to promote their brand image, while welcoming the most extreme far-right supporters, whose comments are widely moderated on Twitter and other mainstream social networks.

After Parler was banned from the most well-known app stores (Amazon, Apple, and Google, the main providers of these apps) due to its lack of Content Moderation, it was forced to go offline, and its now-former CEO, John Matze, was fired over his push for stronger moderation. Several other social media platforms claim to promote free speech (Telegram, Gab), but some have bravely chosen to take on the Content Moderation challenge to avoid Parler’s fate.

Similar patterns are already observed for new and innovative social media, including Substack (a newsletter publishing platform) and the infamous Clubhouse (live audio conferences). The former was not expecting controversy until one of its newsletters linked IQ to race; the latter poses new questions about how to efficiently moderate live audio feeds.

Mastering Content Moderation policies is the key to success

The scale of emerging social media platforms, as well as their innovative formats and technology, imposes new challenges on Content Moderation, as the increased scrutiny from users makes evident. Without the benefit of years of Content Moderation experience, newcomers and smaller players must ensure their policies fit their targeted communities and their content: policies that are too permissive or too restrictive endanger their longevity and brand image.

Mastering Content Moderation enforcement is a lever for the welfare of your community and your reputation.


 


Digital content significance on second-hand car platforms

Revealing why consumer content is instrumental in influencing both purchase decision-making and the uptake, visibility, and popularity of brands online.

The trust and safety of users online is crucial in today’s digital world. Technology is disrupting businesses, so ensuring customers a safe and painless journey requires a balance of technology and human intervention.

This paper looks at some of the pain points in the automotive industry space, highlighting the typical industry reactions and insights into how Webhelp can offer a comprehensive and game-changing solution.

Download insights

Read the 6th edition of our OneShot magazine on Social Engagement

Our 6th edition of the OneShot is here!

Download your OneShot Magazine

Tick tock tick tock…

Time is ticking away – now is the time to start focusing on social engagement.

Social commitment means becoming aware but, above all, taking action and standing up against inequality.

Taking action can be as simple as following these recipes: be more human, more green, and more equal. Not only are they good for you, but for others too.

Compelling your company to pledge and commit to the fight for social and environmental change – from the global warming crisis to social justice and equality – is a vital step to take now for a brighter future.

And it all starts with knowledge. So, here’s to your learning with the latest edition of the OneShot.

Dare to be ‘woke’ and be a driving force for change?

Taking a human centred approach to cyber security

In response to the evolving cyber challenge in the post-COVID-19 landscape, James Allen, Chief Risk & Technology Officer for the Webhelp UK Region, considers the way that risk in customer service has evolved and reveals the steps Webhelp has taken to protect its clients and people, with a human centred approach to cyber security.

The humanitarian crisis brought by COVID-19 undoubtedly caused rapid and universal disruption to businesses across the global stage, impacting economies and leaving some companies struggling to maintain business continuity while increasingly vulnerable to unscrupulous cyber criminals.

In fact, the Council of Europe’s Cybercrime division has reported evidence of a substantial rise in malicious activity (specific to the topic of COVID-19) in areas like phishing, malware, ransomware, infrastructure attacks, targeting teleworking employees to gain system access, fraud schemes (fake medicines and goods), misinformation and fake news.

In July, Action Fraud, the UK’s national reporting centre for fraud and cybercrime, published that victims had already lost over £11 million to COVID-19 related scams.

Consequently, the pandemic has put an intense spotlight on personal cyber practices, especially as working from home (without proper measures) can create more risk than the traditional controlled office environment. Similarly, Tech Republic reported that, from phishing attacks to malware, 71% of security professionals have been recording increased security threats or attacks since the COVID-19 outbreak, and as a result many countries and companies have been spurred into rapid action.

In the UK, more than 80 coronavirus-related phishing and scam websites were taken down in just one day after the UK’s National Cyber Security Centre (NCSC) asked the public to report suspicious emails. Existing takedown services, in one month alone, removed more than 2,000 online scams related to coronavirus, including 471 fake online shops, 555 malware distribution sites, 200 phishing sites and 832 advance-fee frauds. NCSC chief executive officer Ciaran Martin believes that the rise in technology use is making online safety more critical, saying:

 “Technology is helping us cope with the coronavirus crisis and will play a role helping us out of it – but that means cybersecurity is more important than ever,”   Source: Zdnet.com

And, according to PWC, 80% of UK CEOs are concerned about the risk of cyber threats to their business – it is the issue they are most worried about, above skills (79%) and the speed of technological change (75%).

Revealingly, just under half of UK CEOs (48.4%) have taken some action regarding their own personal digital behaviour, including deleting social media or requesting a company to delete their data.

This is a worrying trend, which was noticeable even prior to the current crisis, as (according to the U.S. Securities and Exchange Commission) 2019 saw a 350% increase in ransomware attacks, a 250% increase in spoofing or business email compromise (BEC) attacks and a 70% increase in spear-phishing attacks in companies overall.

Furthermore, the average cost of a cyber-data breach rose from $4.9 million in 2017 to $7.5 million in 2018. Likewise, worldwide spending on cyber security increased by over 20% during 2017-2019 ($101Bn – $124Bn) and inevitably these costs will continue to rise, but without addressing the human behaviours contributing to this trend, much of this investment could be wasted.

And behaviour change is the key, as research firm Proofpoint revealed that a staggering 99% of threats observed relied on human interaction like enabling a macro, opening a file, following a link, or opening a document – highlighting the role of social engineering in enabling successful attacks, and the importance of knowledge as the top factor for prevention.

A recent FirstData study revealed that 60% of individuals are currently concerned about online security, and feel the need to do more to protect themselves. But information on how to do this is clearly absent, as over a quarter of those asked were entirely uninformed about the subject.

We know that the pandemic has led to record numbers of individuals now working from home – often without prior knowledge and experience of safe remote working practices and the potential security risks. And this situation is complicated by the fact that companies too often publish complex security policies that are difficult for the regular user to understand.

As a people-first company, Webhelp is committed to a human centred approach to Cyber Security, aiming to provide all our people with the essential skills to keep them and their families safe online.

From the start it was clear that education was critical to delivering this goal. We recognised a need for clear and simple guidelines, put forward in an engaging and easy to follow manner, to help employees gain insight and confidence in recognising and protecting themselves against potential scams and take action when approaching cyber security.

So, in 2020 we launched our Cyber Super Heroes Campaign, designed to make complex security advice simple and accessible to all colleagues. This campaign addressed these issues in a humorous yet informative voice, and our activity accelerated to support our colleagues through a time when cyber threats were increasing.

Focusing on a different topic every fortnight, guidance has been delivered across multiple channels including on site, email, social media, the employee intranet, desktops and screen savers and by using digital animations and posters.

Our people were also given the opportunity to get involved by becoming a Webhelp Cyber Superhero, through signing up for in-depth additional information to better champion the cause to their teammates and families.

The campaign has covered a full spectrum of cyber security topics including:

  • Phishing
  • Safe Passwords
  • Physical Security, both at work and at home
  • Keeping safe online
  • Social Engineering
  • Malware
  • Social Media
  • Keeping kids safe online
  • Safe Online Banking
  • Keeping your devices secure when you’re out and about
  • Cookies

Finally, to add a truly human face to our campaign, personal stories from volunteers in our business were shared. Colleagues were extremely keen to highlight their experiences and offered heartfelt advice to their colleagues, with the goal of really delivering a relatable message that Cyber scams can and do happen, and that together we can make our online activity safer, both in our workplaces and in our homes.

However, the work doesn’t stop there as Head of Cyber & Privacy for Webhelp UK Region, Chris Underhill, explains:

“The cyber threat landscape is constantly evolving, requiring businesses to monitor threats, adapt to change and deal with incidents swiftly. As part of my new role in Webhelp, I will be supporting our international teams and clients with cutting edge cyber intelligence, training, technology and consultancy services that not only help secure organisations against a growing number of threats, but also provide professional, certified level assurance to help secure business as usual against a backdrop of regulation, uncertain times and new working conditions.

It’s clear that threats facing businesses extend well beyond the network perimeter and a move towards a new ‘human centric’ approach to cyber security is required to protect critical assets from compromise. Webhelp are committed to supporting our teams and clients using the very best in technology and educational programmes that will provide a robust suite of solutions across the industry.”

Agility and innovation in risk has been crucial to managing the pace of change during the pandemic, so despite the challenges brought by COVID, fear must not stand in the way of progress. This is something that will be explored further in a forthcoming blog for the #servicereimagined series.

 

To discover more about customer service models post COVID-19, read our new whitepaper, a joint publication with Gobeyond Partners (part of the Webhelp group), Reimagining service for the new world, which is underpinned by our unique industry perspective alongside new research into the operating models of the future.