May 19, 2020

Cyber security: C-suite and IT disconnect is making it easier for cyber criminals

Human Resources
Employment
Executives
Cyber Security
Michael Shepherd
4 min

Cyber security is a major concern for Australian businesses, no matter their size or industry, and dealing with ransomware and DDoS attacks has unfortunately become business as usual, with detrimental consequences for both the bottom line and reputation. The recent WannaCry ransomware attack, which affected more than 150,000 machines across the globe, was an example of the devastating impact a cyber attack can have, disrupting critical medical services in the United Kingdom and networks around the world.

According to our latest Cyber Defence Monitor 2017 report, cyber security represents the most significant business challenge for 71 per cent of C-suite executives, and 72 per cent of IT decision-makers expect to be targeted by a cyber attack over the next 12 months.

While 50 per cent of businesses plan to increase the time and resources spent on cyber security, executives and IT leaders can’t agree on who should be accountable for managing the budget or where it should be spent.  

The intelligence disconnect

There are still major gaps between how the executive suite and IT leaders perceive the issues and priorities attached to cyber security. For example, in the event of a successful attack, business leaders are more worried about sensitive information theft, loss of customer information and reputational damage, while IT decision-makers are more worried about intellectual property (IP) theft, fraud and business disruption.

Australian organisations can’t build an effective cyber security strategy if they are out of sync on their priorities. They also can’t properly protect the organisation’s most important assets if they are not aligned on who is responsible, or indeed what those assets are.

Just as concerning, 77 per cent of C-suite executives and 93 per cent of IT decision-makers don’t believe they have the skills they need to deal with a cyber attack.

This lack of a common view on which assets matter most, combined with limited confidence in available skills, means these gaps in understanding, intelligence and responsibility must be narrowed quickly.

Joining forces to redefine security strategies and build threat intelligence

A diversity of opinion tied to common goals is a sign of strength in an organisation, and it’s clear that effective collaboration, communication and intelligence sharing are the bedrock on which effective defences will be built. Yet IT and business teams don’t always communicate openly, directly or comprehensively.

It’s time business leaders stopped pointing the finger at IT teams and started participating actively in securing their organisations. They are the ones overseeing the wider business, and it is their role to raise awareness of the cyber threat across all lines of business, so employees are better informed and less likely to be the source of a breach.

They should also be intimately involved in deciding where the security budget is spent, which includes considering outsourcing part of their security to industry experts to benefit from economies of scale, specialist facilities, shared intelligence and the ability to call upon scarce skills that are in high demand.

Hiring the right IT skills has been a real struggle for Australian organisations in the past few years. With the increased sophistication of threats, finding relevant security experts has become very challenging, and expensive. Rather than treating this as a barrier, organisations should make hiring one of their priorities and start thinking about their future cyber skills requirements today, nurturing the talent required to ensure a thriving pipeline of skilled workers and ideas to meet this growing challenge.

Finally, businesses should be open to sharing knowledge with peers, law enforcement, governments and IT security firms, to strengthen both their own defences and the industry’s defences against cybercrime. By collating and analysing data from various sources within one common framework, Australian businesses can build threat intelligence engines that benefit whole industries, as sketched below.
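To make the idea of ‘one common framework’ concrete, the minimal sketch below shows how indicators from differently shaped feeds might be normalised into a single shared schema before analysis. It is purely illustrative: the feed formats, field names and sample values are assumptions for the example, not any real organisation’s interface.

```python
# Illustrative sketch only: normalising threat indicators from two
# hypothetical feeds into one common schema. All formats, field names
# and sample data here are invented for the example.
from dataclasses import dataclass


@dataclass(frozen=True)
class Indicator:
    """The common schema shared by every contributor."""
    value: str         # e.g. an IP address or file hash
    kind: str          # "ip", "domain", "sha256", ...
    source: str        # which peer, agency or feed reported it
    confidence: float  # normalised to 0.0-1.0


def from_peer_feed(record: dict) -> Indicator:
    # Hypothetical peer format: {"ioc": ..., "type": ..., "score": 0-100}
    return Indicator(record["ioc"], record["type"], "peer",
                     record["score"] / 100)


def from_gov_feed(record: dict) -> Indicator:
    # Hypothetical government format: {"observable": ..., "category": ...}
    return Indicator(record["observable"], record["category"],
                     "government", 0.9)


def collate(*batches: list) -> list:
    """Merge indicators from all sources, de-duplicating on (value, kind)."""
    seen, merged = set(), []
    for batch in batches:
        for ind in batch:
            key = (ind.value, ind.kind)
            if key not in seen:
                seen.add(key)
                merged.append(ind)
    return merged


peer = [from_peer_feed({"ioc": "203.0.113.7", "type": "ip", "score": 80})]
gov = [from_gov_feed({"observable": "203.0.113.7", "category": "ip"})]
for indicator in collate(peer, gov):
    print(indicator)
```

Once every source speaks the same schema, de-duplication, scoring and industry-wide analysis become straightforward, which is precisely the benefit a shared framework aims for.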

In an increasingly connected world, it is no longer possible for businesses and business units to remain siloed, or for leaders to be hands-off on cyber defence. Without a common understanding of where the business stands today, where it wants to go and how it will get there, IT decision-makers and C-suite executives risk wasting scarce resources and ending up in the spotlight for all the wrong reasons.


Michael Shepherd is the Regional Managing Director for Australia and New Zealand (ANZ) at BAE Systems Applied Intelligence.


Jun 17, 2021

Chinese Firm Taigusys Launches Emotion-Recognition System

Taigusys
China
Huawei
AI
3 min
Critics claim that new AI emotion-recognition platforms like the one developed by Taigusys could infringe on Chinese citizens’ rights; Taigusys disagrees

In a detailed investigative report, the Guardian revealed that Chinese tech company Taigusys has developed a system that monitors facial expressions. The company claims the system can track fake smiles, chart genuine emotions, and help police curtail security threats. ‘Ordinary people here in China aren’t happy about this technology, but they have no choice. If the police say there have to be cameras in a community, people will just have to live with it’, said Chen Wei, company founder and chairman. ‘There’s always that demand, and we’re here to fulfil it’.


Who Will Use the Data? 

The emotion-recognition market is projected to be worth US$36bn by 2023, a figure that hints at rapid global adoption. Taigusys counts Huawei, China Mobile, China Unicom, and PetroChina among its 36 clients, but none of them has yet revealed whether they’ve purchased the new AI. Taigusys is also likely to deploy the technology in Chinese prisons, schools, and nursing homes.


Emotion-recognition AI is unlikely to stay within the realm of private enterprise. President Xi Jinping has promoted ‘positive energy’ among citizens and intimated that negative expressions are no good for a healthy society. If the Chinese central government continues to gain control over private companies’ tech data, national officials could use emotional data for ideological purposes, targeting ‘unhappy’ or ‘suspicious’ citizens.


How Does It Work? 

Taigusys’s AI tracks facial muscle movements, body motions, and other biometric data to infer how a person is feeling, collecting massive amounts of personal data for machine-learning purposes. If an individual displays too much negative emotion, the platform can recommend them for what’s termed ‘emotional support’, which may turn out to be something far worse.
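For readers wondering what such a pipeline looks like in practice, the toy sketch below maps hypothetical facial-landmark measurements to emotion scores via a linear model and a softmax. It illustrates the general technique only; the features, weights and labels are invented and bear no relation to Taigusys’s actual models.

```python
# Toy emotion-inference pipeline: landmark-derived features -> linear
# scores -> softmax over emotion labels. Everything here (features,
# weights, labels) is invented purely for illustration.
import math

EMOTIONS = ["neutral", "happy", "angry", "fearful"]

# Invented per-emotion weights for the two features computed below.
WEIGHTS = {
    "neutral": [0.0, 0.0],
    "happy":   [2.0, 0.5],
    "angry":   [-1.0, -2.0],
    "fearful": [-0.5, 2.0],
}


def extract_features(landmarks: dict) -> list:
    """Reduce raw landmark coordinates to two scalar features
    (mouth-corner lift and brow-to-eye gap, hypothetical choices)."""
    mouth_lift = landmarks["mouth_corner_y"] - landmarks["mouth_center_y"]
    brow_gap = landmarks["brow_y"] - landmarks["eye_y"]
    return [mouth_lift, brow_gap]


def softmax(scores: list) -> list:
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]


def infer_emotion(landmarks: dict) -> dict:
    feats = extract_features(landmarks)
    scores = [sum(w * f for w, f in zip(WEIGHTS[e], feats))
              for e in EMOTIONS]
    return dict(zip(EMOTIONS, softmax(scores)))


# Raised mouth corners relative to the mouth centre -> 'happy' dominates.
print(infer_emotion({"mouth_corner_y": 1.2, "mouth_center_y": 0.4,
                     "brow_y": 2.0, "eye_y": 1.8}))
```

Real systems replace these hand-set weights with models trained on millions of labelled faces, which is exactly where the data hunger and the bias concerns discussed below come from.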


Can We Really Detect Human Emotions? 

Many critics say no. Psychologists still debate whether human emotions can be separated into basic categories such as fear, joy, and surprise that hold across cultures, or whether something more complex is at stake. Many argue that AI emotion-reading technology is not only unethical but also inaccurate, since facial expressions don’t necessarily indicate someone’s true emotional state.


In addition, Taigusys’s facial tracking system could entrench racial bias. One of the company’s systems classifies faces as ‘yellow, white, or black’; another distinguishes between Uyghur and Han Chinese; and the technology sometimes picks up certain ethnic features better than others.


Is China the Only One? 

Not a chance. Other countries have also tried to decode and use emotions. In 2007, the U.S. Transportation Security Administration (TSA) launched SPOT (Screening of Passengers by Observation Techniques), a heavily contested training programme that taught airport personnel to monitor passengers for signs of stress, deception, and fear. But China as a nation rarely discusses bias, and as a result, its AI-based discrimination could prove more dangerous.


‘That Chinese conceptions of race are going to be built into technology and exported to other parts of the world is troubling, particularly since there isn’t the kind of critical discourse [about racism and ethnicity in China] that we’re having in the United States’, said Shazeda Ahmed, an AI researcher at New York University (NYU).


Taigusys’s founder counters that the system can help prevent tragic violence, citing a 2020 stabbing attack in Guangxi Province that injured 41 people. Yet top academics remain unconvinced. As Sandra Wachter, associate professor and senior research fellow at the Oxford Internet Institute, said: ‘[If this continues], we will see a clash with fundamental human rights, such as free expression and the right to privacy’.

