May 19, 2020

How Australian CISOs and CIOs should frame cybersecurity conversations with the board

CIO
Cybersecurity
CISO
Armando Dacal
3 min

Australian companies are embracing the cloud, but many don’t seem to fully understand what they need to do to keep all of their data and workloads in the cloud secure. Anecdotally, companies either trust their cloud provider to manage security for them or avoid the cloud altogether because of security fears. 

Neither approach is ideal. The cloud works on a shared responsibility model: your cloud provider is responsible for securing the infrastructure, while your business is responsible for securing your data. However, security fears shouldn’t scare your organisation away from the significant benefits the cloud can offer. 

Businesses that get security right generally do so because security is prioritised rather than treated as an afterthought. In most cases, educating business leaders about the importance of strong cybersecurity falls to the CIO or CISO. This responsibility shouldn’t be overlooked, because a strong security posture starts at the top of the organisation: business leaders can’t expect their staff to prioritise security if they haven’t set the agenda at the highest levels. 

This makes it imperative for CISOs to insert themselves into the security conversation at the board level. 

There are four key things that CISOs should communicate to the board:

1. The cloud is just another risk

Many board members, especially those who aren’t particularly technology-savvy, will switch off during conversations about cybersecurity. It’s therefore important to frame the conversation in terms of business risk. Helping board members visualise the reality and potential severity of security risks goes a long way towards getting their attention. The board should treat cloud and cyber risk the same way it treats any other risk: by identifying the gaps and mitigating them as far as possible.

2. Native public cloud security isn’t enough

Just because cloud providers have some security natively built in doesn’t mean businesses can ignore their responsibility to secure their own data. Under the shared responsibility model, data in the cloud needs the same level of protection as data stored anywhere else in the organisation. It’s therefore essential to put additional, specific security measures in place to protect data in the cloud. Furthermore, for best results, these measures should be thoroughly integrated with the rest of your security architecture, and security processes should be automated wherever possible. 

3. Cloud security is no different from other cybersecurity

Far from requiring a different type of security, cloud deployments become more secure when you apply a consistent approach to managing security across the entire enterprise, regardless of where information or applications reside. Managing and orchestrating multiple security approaches and products only adds complexity to the environment, which creates room for errors and gaps. You should therefore highlight the importance of a consistent, strategic approach to cybersecurity as a whole. 

4. Preventing breaches includes securing the cloud

Preventing breaches from happening is the ideal scenario in cybersecurity. While it is possible to mitigate the effects of an attack, a successful attack will inevitably cause some damage to the organisation. That damage can be financial, reputational, and even legal now that Australia’s mandatory Notifiable Data Breaches (NDB) scheme is in force. To stop attacks from succeeding, you need consistent visibility and protections across the entire business network, including everything that runs in the cloud. Any security investment should be judged on its ability to stop attackers in their tracks; that’s where it delivers its return on investment. 

CIOs and CISOs who can articulate these points clearly and compellingly to the board will find it easier to get executive buy-in for security projects and high-level support for a strong security culture, and, ultimately, will be able to protect the organisation more effectively. 

Armando Dacal, Vice President of Strategic Alliances and Global Accounts, Palo Alto Networks

Jun 17, 2021

Chinese Firm Taigusys Launches Emotion-Recognition System

Taigusys
China
Huawei
AI
3 min
Critics claim that new AI emotion-recognition systems like the one Taigusys has built could infringe on Chinese citizens’ rights; Taigusys disagrees

In a detailed investigative report, the Guardian revealed that Chinese tech company Taigusys has built a system that monitors people’s facial expressions. The company claims it can detect fake smiles, chart genuine emotions, and help police curtail security threats. ‘Ordinary people here in China aren’t happy about this technology, but they have no choice. If the police say there have to be cameras in a community, people will just have to live with it’, said Chen Wei, company founder and chairman. ‘There’s always that demand, and we’re here to fulfil it’. 

Who Will Use the Data? 

The emotion-recognition market is projected to be worth US$36bn by 2023, which hints at rapid global adoption. Taigusys counts Huawei, China Mobile, China Unicom, and PetroChina among its 36 clients, but none of them has yet revealed whether it has purchased the new AI. Taigusys is also likely to deploy the technology in Chinese prisons, schools, and nursing homes.

Emotion-recognition AI is unlikely to stay within the realm of private enterprise. President Xi Jinping has promoted ‘positive energy’ among citizens and intimated that negative expressions have no place in a healthy society. If the Chinese central government continues to gain control over private companies’ technology and data, national officials could use emotional data for ideological purposes and target ‘unhappy’ or ‘suspicious’ citizens. 

How Does It Work? 

Taigusys’s AI tracks facial muscle movements, body motions, and other biometric data to infer how a person is feeling, collecting massive amounts of personal data for machine-learning purposes along the way. If an individual displays too much negative emotion, the platform can flag him or her for what’s termed ‘emotional support’, which may in practice turn out to be something much worse. 

Can We Really Detect Human Emotions? 

The question is still up for debate, but many critics say no. Psychologists disagree over whether human emotion can be separated into basic, cross-cultural categories such as fear, joy, and surprise, or whether something more complex is at stake. Many argue that AI emotion-reading technology is not only unethical but also inaccurate, since facial expressions don’t necessarily reflect someone’s true emotional state. 

In addition, Taigusys’s facial tracking systems could reinforce racial bias. One of the company’s systems classes faces as ‘yellow, white, or black’; another distinguishes between Uyghur and Han Chinese; and the technology sometimes picks up the features of certain ethnic groups better than others. 

Is China the Only One? 

Not a chance. Other countries have also tried to decode and use emotions. In 2007, the U.S. Transportation Security Administration (TSA) launched a heavily contested training programme, SPOT, that taught airport personnel to monitor passengers for signs of stress, deception, and fear. But bias is rarely discussed openly in China, and as a result, AI-based discrimination there could prove more dangerous. 

‘That Chinese conceptions of race are going to be built into technology and exported to other parts of the world is troubling, particularly since there isn’t the kind of critical discourse [about racism and ethnicity in China] that we’re having in the United States’, said Shazeda Ahmed, an AI researcher at New York University (NYU). 

Taigusys’s founder, on the other hand, argues that the company’s system can help prevent tragic violence, citing a 2020 attack in Guangxi Province in which 41 people were stabbed. Yet top academics remain unconvinced. As Sandra Wachter, associate professor and senior research fellow at the Oxford Internet Institute, University of Oxford, said: ‘[If this continues], we will see a clash with fundamental human rights, such as free expression and the right to privacy’. 
