May 19, 2020

5 Reasons Why Information Security Departments Are Under-Resourced

Everyone knows that the moment new technology is purchased, it doesn’t take long for it to become outdated. As technology in all aspects of our lives continues to develop at a startling pace, businesses feel the pressure to stay as up-to-date as possible. One of the biggest concerns many companies have is keeping their data secure as the technology around them outgrows their capabilities.

Because of this, the information security departments of many companies are under-resourced, as demand for cyber security support outstrips the organisation’s ability to provide the service.

David Owen, the director of strategy and marketing for Asia Pacific and Middle East, BAE Systems Applied Intelligence, cites five reasons information security is feeling the pressure.

1. Sensitive Information Is Proliferating Outside The Corporate Moat 
Increasing numbers of employees are using non-sanctioned applications like Dropbox on work devices. As company information proliferates across a myriad of major cloud providers, business process outsourcing services, IT service providers and data analytics consultancies, security departments struggle to govern and track it all.

2. Security Is A 'Contraceptive' Business Case 
There is only an indirect relationship between investment in security and positive business outcomes like higher revenue, greater market share or reduced costs. Companies are therefore often unwilling to prioritise threat management over other potential business investments. This leaves the security department under-prepared to prevent and detect cyber threats.

3. The 'Protect Everything' Mindset 
Many organisations spend the vast majority of their resources on baseline controls that protect the entire organisation (often at the perimeter), rather than targeting controls at the specific systems and business processes that handle sensitive information (which often straddle the perimeter into the supply chain).

4. Hardening Regulation Adds Costs 
Governments and regulators constantly ratchet up regulation, and this regulation is sometimes wide-ranging in its impact. Yet security teams are often expected to absorb the cost of new regulation within an existing budget baseline rather than assessing the true incremental cost of adoption. This can result in a focus on compliance over risk management.

5. Cyber Security Isn't Owned By The Wider Business
In many organisations the prevailing view is that users should be able to just turn up at their computers without considering security. This mindset ignores the fact that an increasing proportion of sophisticated security threats focus on persuading the user to take an active role in the compromise, and that staff, managers and leaders all own the problem.

Jun 17, 2021

Chinese Firm Taigusys Launches Emotion-Recognition System

Critics claim that new AI emotion-recognition platforms like Taigusys could infringe on Chinese citizens’ rights, but Taigusys disagrees

In a detailed investigation, the Guardian reported that Chinese tech company Taigusys can now monitor facial expressions. The company claims that it can track fake smiles, chart genuine emotions, and help police curtail security threats. ‘Ordinary people here in China aren’t happy about this technology, but they have no choice. If the police say there have to be cameras in a community, people will just have to live with it’, said Chen Wei, company founder and chairman. ‘There’s always that demand, and we’re here to fulfil it’.

Who Will Use the Data? 

The emotion-recognition market is projected to be worth US$36bn by 2023, which hints at rapid global adoption. Taigusys counts Huawei, China Mobile, China Unicom, and PetroChina among its 36 clients, but none of them has yet revealed whether they’ve purchased the new AI. In addition, Taigusys will likely implement the technology in Chinese prisons, schools, and nursing homes.

It’s not likely that emotion-recognition AI will stay within the realm of private enterprise. President Xi Jinping has promoted ‘positive energy’ among citizens and intimated that negative expressions are no good for a healthy society. If the Chinese central government continues to gain control over private companies’ tech data, national officials could use emotional data for ideological purposes—and target ‘unhappy’ or ‘suspicious’ citizens. 

How Does It Work? 

Taigusys’s AI will track facial muscle movements, body motions, and other biometric data to infer how a person is feeling, collecting massive amounts of personal data for machine learning purposes. If an individual displays too much negative emotion, the platform can recommend him or her for what’s termed ‘emotional support’—and what may end up being much worse. 
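
The description suggests a fairly standard classification pipeline: per-frame biometric features are mapped to probabilities over a small set of basic emotions, and a person is flagged when ‘negative’ scores dominate over time. The sketch below illustrates that general pattern only; the feature count, emotion labels, weights, and flagging threshold are all hypothetical placeholders, and none of this is Taigusys’s actual code.

```python
# Purely illustrative sketch of a generic emotion-scoring pipeline of the
# kind described above -- NOT Taigusys's system. The feature count, emotion
# labels, weights, and flagging threshold are all hypothetical placeholders.
import numpy as np

EMOTIONS = ["neutral", "joy", "surprise", "fear", "anger", "sadness"]
NEGATIVE = {"fear", "anger", "sadness"}  # hypothetical 'negative' label set


def softmax(logits: np.ndarray) -> np.ndarray:
    """Numerically stable softmax over emotion logits."""
    shifted = logits - logits.max()
    exp = np.exp(shifted)
    return exp / exp.sum()


def score_frame(features: np.ndarray, weights: np.ndarray, bias: np.ndarray) -> np.ndarray:
    """Map one frame's facial/body feature vector to emotion probabilities."""
    return softmax(weights @ features + bias)


def flag_subject(frames, weights, bias, threshold=0.6):
    """Flag a subject when 'negative' emotions dominate a window of frames."""
    probs = np.stack([score_frame(f, weights, bias) for f in frames])
    neg_idx = [i for i, name in enumerate(EMOTIONS) if name in NEGATIVE]
    neg_share = probs[:, neg_idx].sum(axis=1).mean()  # mean negative probability
    return neg_share > threshold, neg_share


# Toy usage: random weights and features stand in for a trained classifier
# and real per-frame measurements (e.g. facial action-unit intensities).
rng = np.random.default_rng(0)
W, b = rng.normal(size=(6, 17)), np.zeros(6)        # 6 emotions x 17 features
frames = [rng.normal(size=17) for _ in range(30)]   # ~one second of video
flagged, share = flag_subject(frames, W, b)
print(f"flagged={flagged}, negative share={share:.2f}")
```

Averaging scores over a window of frames rather than a single snapshot is what lends such systems an air of reliability, even though, as the critics below note, the per-frame emotion labels themselves are contested.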

Can We Really Detect Human Emotions? 

This is still up for debate, but many critics say no. Psychologists still argue over whether human emotions can be separated into basic categories such as fear, joy, and surprise across cultures, or whether something more complex is at stake. Many claim that AI emotion-reading technology is not only unethical but also inaccurate, since facial expressions don’t necessarily indicate someone’s true emotional state.

In addition, Taigusys’s facial tracking system could promote racial bias. One of the company’s systems classes faces as ‘yellow, white, or black’; another distinguishes between Uyghur and Han Chinese; and sometimes, the technology picks up certain ethnic features better than others. 

Is China the Only One? 

Not a chance. Other countries have also tried to decode and use emotions. In 2007, the U.S. Transportation Security Administration (TSA) launched a heavily contested training programme, SPOT (Screening of Passengers by Observation Techniques), which taught airport personnel to monitor passengers for signs of stress, deception, and fear. But China as a nation rarely discusses bias, and as a result, its AI-based discrimination could be more dangerous.

‘That Chinese conceptions of race are going to be built into technology and exported to other parts of the world is troubling, particularly since there isn’t the kind of critical discourse [about racism and ethnicity in China] that we’re having in the United States’, said Shazeda Ahmed, an AI researcher at New York University (NYU).

Taigusys’s founder counters that its system can help prevent tragic violence, citing a 2020 attack in Guangxi Province in which 41 people were stabbed. Yet top academics remain unconvinced. As Sandra Wachter, associate professor and senior research fellow at the Oxford Internet Institute, University of Oxford, said: ‘[If this continues], we will see a clash with fundamental human rights, such as free expression and the right to privacy’.
