Jun 1, 2021

Microsoft sets up APAC public sector cybersecurity council

By Kate Birch | 2 min read
Tags: Cybersecurity, Coalition, Microsoft, Threat Intelligence
With APAC experiencing a higher rate of ransomware attacks than the rest of the world, Microsoft sets up a council to accelerate public-private partnerships in cybersecurity

With the aim of building a strong and coordinated response to cyberattacks across Asia Pacific, Microsoft has unveiled the first APAC Public Sector Cyber Security Executive Council.

The council aims to fast-track public-private partnerships in cybersecurity and to promote broader sharing of threat intelligence, so that members are better positioned to respond in the event of an attack.

The council consists of 15 policymakers from seven APAC countries – Brunei, Indonesia, Korea, Malaysia, the Philippines, Singapore and Thailand – along with cybersecurity professionals from Microsoft.

Council to share threat intelligence

This strong coalition builds on existing efforts to strengthen cybersecurity partnerships across Asia-Pacific, including through the Asia-Pacific Economic Cooperation (APEC), the Association of Southeast Asian Nations (ASEAN) and the Global Forum on Cyber Expertise.

The council's vision is to build a community in which threat intelligence, technology and resources can be shared in a timely and open manner, enabling the cooperating nations to exchange best practices and strategies for overcoming cybersecurity challenges.

“With similar threat landscapes, this partnership will ensure that we are steps ahead of the perpetrators, establishing higher standards for the cybersecurity ecosystem as well,” says ChangHee Yun, Principal Researcher at the AI/Future Strategy Center, National Information Society Agency of Korea.

The forum will also share best practices and draw on Microsoft security certification trainings, dedicated workshops and hands-on lab sessions, with the goal of improving the digital skills of the workforce and reducing the cybersecurity talent gap across the participating nations.

Members will meet virtually every quarter to share experiences and knowledge of cyber threats and to exchange information on cybersecurity solutions.

Asia Pacific has a higher rate of cyberattacks

While cybercrime is a growing problem worldwide, it is especially acute across Asia Pacific, with the region continuing to experience a higher-than-average encounter rate for malware and ransomware attacks – 1.6 and 1.7 times higher, respectively, than the rest of the world, according to findings from Microsoft’s Security Endpoint Threat Report.

Developing countries such as Sri Lanka, India and Vietnam are the most vulnerable to such threats, while malware and ransomware encounter rates in Japan, New Zealand and Australia were three to six times lower than the regional average. According to Mary Jo Schrade, Assistant General Counsel, Microsoft Digital Crimes Unit, Microsoft Asia, “countries that have higher piracy rates and lower cyber hygiene tend to be more severely impacted by cyberthreats”.

And since the onset of the pandemic, the volume of successful attacks in outbreak-hit countries seems to be increasing, according to the report.


Jun 17, 2021

Chinese Firm Taigusys Launches Emotion-Recognition System

3 min read
Tags: Taigusys, China, Huawei, AI
Critics claim that new AI emotion-recognition platforms such as the one developed by Taigusys could infringe on Chinese citizens’ rights – Taigusys disagrees

In a detailed investigation, the Guardian reported that Chinese tech company Taigusys can now monitor facial expressions. The company claims that it can track fake smiles, chart genuine emotions, and help police curtail security threats. ‘Ordinary people here in China aren’t happy about this technology, but they have no choice. If the police say there have to be cameras in a community, people will just have to live with it’, said Chen Wei, company founder and chairman. ‘There’s always that demand, and we’re here to fulfil it’.


Who Will Use the Data? 

The emotion-recognition market is projected to be worth US$36bn by 2023, which hints at rapid global adoption. Taigusys counts Huawei, China Mobile, China Unicom, and PetroChina among its 36 clients, but none of them has yet revealed whether they have purchased the new AI. Taigusys is also likely to deploy the technology in Chinese prisons, schools, and nursing homes.


It’s not likely that emotion-recognition AI will stay within the realm of private enterprise. President Xi Jinping has promoted ‘positive energy’ among citizens and intimated that negative expressions are no good for a healthy society. If the Chinese central government continues to gain control over private companies’ tech data, national officials could use emotional data for ideological purposes—and target ‘unhappy’ or ‘suspicious’ citizens. 


How Does It Work? 

Taigusys’s AI tracks facial muscle movements, body motions, and other biometric data to infer how a person is feeling, collecting massive amounts of personal data for machine-learning purposes. If an individual displays too much negative emotion, the platform can recommend him or her for what’s termed ‘emotional support’, which may in practice turn out to be something far worse.


Can We Really Detect Human Emotions? 

This is still up for debate, but many critics say no. Psychologists continue to debate whether human emotions can be separated into universal basic categories such as fear, joy, and surprise across cultures, or whether something more complex is at stake. Many argue that AI emotion-reading technology is not only unethical but also inaccurate, since facial expressions don’t necessarily indicate someone’s true emotional state.


In addition, Taigusys’s facial-tracking system could promote racial bias. One of the company’s systems classifies faces as ‘yellow, white, or black’; another distinguishes between Uyghur and Han Chinese; and the technology sometimes picks up certain ethnic features better than others.


Is China the Only One? 

Not a chance. Other countries have also tried to decode and use emotions. In 2007, the U.S. Transportation Security Administration (TSA) launched a heavily contested training programme, Screening of Passengers by Observation Techniques (SPOT), that taught airport personnel to monitor passengers for signs of stress, deception, and fear. But China as a nation rarely discusses bias, and as a result, its AI-based discrimination could be more dangerous.


‘That Chinese conceptions of race are going to be built into technology and exported to other parts of the world is troubling, particularly since there isn’t the kind of critical discourse [about racism and ethnicity in China] that we’re having in the United States’, said Shazeda Ahmed, an AI researcher at New York University (NYU).


Taigusys’s founder, on the other hand, points out that its system can help prevent tragic violence, citing a 2020 stabbing of 41 people in Guangxi Province. Yet top academics remain unconvinced. As Sandra Wachter, associate professor and senior research fellow at the Oxford Internet Institute, University of Oxford, said: ‘[If this continues], we will see a clash with fundamental human rights, such as free expression and the right to privacy’.
