Gartner: How CIOs can change IT management in 100 days
To tackle issues of digital business leadership, more CIOs are setting up an “office of the CIO” (OCIO) for support. And CIOs in Australia and New Zealand are no different.
An OCIO should comprise a small team that looks after how IT is managed across an organisation. The team, which should report to the CIO, should ensure that IT operations are well run, and should contribute to business growth through IT-enabled business innovation. This will give the CIO more time to focus on enterprisewide IT leadership.
Speaking ahead of his track at Gartner Symposium/ITxpo in Cape Town, Lee Weldon, managing vice president at Gartner, said: “Whether you’re starting a new OCIO or reinvigorating an existing one, accurate preparation, assessment, planning, execution, measurement and, first and foremost, communication are paramount for success.”
Choosing the right team leader is vital, as this person needs to have credibility with both business and IT leaders, and be someone the CIO trusts. Each team member should be similarly business-minded, analytical, influential and proactive.
Weldon and Gartner lay out what the first 100 days of the office of the CIO should focus on:
Days 1 to 30: Analyse, analyse, analyse…
In the beginning, the goal is to analyse and understand the issues that the OCIO team needs to address, and to establish a clear set of priorities. The team should ask questions such as the following: “What impact are we having on business outcomes or customer experiences, and what skills are core to our ability to differentiate as a business?”
The team should also assess, for example, IT strategy, IT metrics and IT spending, to obtain as much insight as possible.
“The insight provided by this analysis phase will help the team achieve ‘quick wins’ to show its value, and to help identify the IT organisation’s long-term priorities,” said Weldon.
Days 31 to 100: Focus on priorities, while achieving quick wins
The OCIO team needs to start this phase by ensuring the IT organisation is doing the right things: creating a strategy that identifies priorities, and refreshing the strategy process. Second, the OCIO needs to ensure the governance model is “fit for purpose,” that the IT organisation knows who the decision makers are, and that the right people are engaged in the right decision-making forums.
Next, ensure that the IT organisation is doing things right by focusing on managing and measuring its performance. The OCIO should ensure clear links to business outcomes and business value. It also needs to assess whether the IT organisation is set up optimally to deliver the desired business outcomes.
Throughout this phase, identify and secure quick wins. “Quick wins not only show how the OCIO can contribute to business outcomes, but also generate support and buy-in,” said Weldon. “For example, assuming a stronger role in facilitating governance, taking on a task from the CIO’s personal agenda, or preparing a monthly status report that gives the CIO and other IT leaders clear insight into the IT organisation’s opportunities and problems.”
Gartner analysts will discuss CIOs’ priorities in the digital era at Gartner Symposium/ITxpo 2016, September 26-28 in Cape Town, South Africa. Follow news and updates from the event on Twitter using #GartnerSYM.
Chinese Firm Taigusys Launches Emotion-Recognition System
In a detailed investigative report, the Guardian reported that Chinese tech company Taigusys can now monitor facial expressions. The company claims that it can track fake smiles, chart genuine emotions, and help police curtail security threats. ‘Ordinary people here in China aren’t happy about this technology, but they have no choice. If the police say there have to be cameras in a community, people will just have to live with it’, said Chen Wei, company founder and chairman. ‘There’s always that demand, and we’re here to fulfil it’.
Who Will Use the Data?
The emotion-recognition market is projected to be worth US$36bn by 2023, which hints at rapid global adoption. Taigusys counts Huawei, China Mobile, China Unicom, and PetroChina among its 36 clients, but none of them has confirmed whether it has purchased the new AI. In addition, Taigusys will likely implement the technology in Chinese prisons, schools, and nursing homes.
It’s not likely that emotion-recognition AI will stay within the realm of private enterprise. President Xi Jinping has promoted ‘positive energy’ among citizens and intimated that negative expressions are no good for a healthy society. If the Chinese central government continues to gain control over private companies’ tech data, national officials could use emotional data for ideological purposes—and target ‘unhappy’ or ‘suspicious’ citizens.
How Does It Work?
Taigusys’s AI will track facial muscle movements, body motions, and other biometric data to infer how a person is feeling, collecting massive amounts of personal data for machine learning purposes. If an individual displays too much negative emotion, the platform can recommend him or her for what’s termed ‘emotional support’—and what may end up being much worse.
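The report describes the system only at a high level. As an illustration of the general approach described above, and emphatically not Taigusys’s actual implementation, a rule-based scorer might weight facial-muscle activations toward a negativity score and flag anyone whose score stays elevated across frames. All feature names, weights, and thresholds below are hypothetical:

```python
# Toy sketch of affect inference from facial-muscle activations.
# Feature names, weights, and thresholds are hypothetical, purely
# illustrative of the general technique described in the article.

NEGATIVE_WEIGHTS = {
    "brow_lowered": 0.4,   # hypothetical action-unit intensities in [0, 1]
    "lip_pressed": 0.3,
    "jaw_clenched": 0.3,
}

def negativity_score(activations: dict) -> float:
    """Weighted sum of negative-affect features, clamped to [0, 1]."""
    score = sum(NEGATIVE_WEIGHTS[k] * activations.get(k, 0.0)
                for k in NEGATIVE_WEIGHTS)
    return min(max(score, 0.0), 1.0)

def flag_for_review(frames: list, threshold: float = 0.6,
                    min_frames: int = 3) -> bool:
    """Flag a subject if negativity exceeds the threshold in enough frames."""
    sustained = sum(1 for f in frames if negativity_score(f) > threshold)
    return sustained >= min_frames

calm = [{"brow_lowered": 0.1}] * 5
tense = [{"brow_lowered": 0.9, "lip_pressed": 0.8, "jaw_clenched": 0.7}] * 5
print(flag_for_review(calm), flag_for_review(tense))  # False True
```

The sketch also makes the critics’ point concrete: the mapping from muscle movements to “emotion” is just a chosen set of weights and cut-offs, and whoever sets the threshold decides who gets flagged.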
Can We Really Detect Human Emotions?
This is still up for debate, but many critics say no. Psychologists still disagree over whether human emotions can be separated into basic categories such as fear, joy, and surprise across cultures, or whether something more complex is at play. Many claim that AI emotion-reading technology is not only unethical but inaccurate, since facial expressions don’t necessarily indicate someone’s true emotional state.
In addition, Taigusys’s facial tracking system could promote racial bias. One of the company’s systems classes faces as ‘yellow, white, or black’; another distinguishes between Uyghur and Han Chinese; and sometimes, the technology picks up certain ethnic features better than others.
Is China the Only One?
Not a chance. Other countries have also tried to decode and use emotions. In 2007, the U.S. Transportation Security Administration (TSA) launched a heavily contested training programme (SPOT) that taught airport personnel to monitor passengers for signs of stress, deception, and fear. But public discussion of bias is rare in China, and as a result, its AI-based discrimination could be more dangerous.
‘That Chinese conceptions of race are going to be built into technology and exported to other parts of the world is troubling, particularly since there isn’t the kind of critical discourse [about racism and ethnicity in China] that we’re having in the United States’, said Shazeda Ahmed, an AI researcher at New York University (NYU).
Taigusys’s founder argues, on the other hand, that its system can help prevent tragic violence, citing a 2020 stabbing of 41 people in Guangxi Province. Yet top academics remain unconvinced. As Sandra Wachter, associate professor and senior research fellow at the Oxford Internet Institute, University of Oxford, said: ‘[If this continues], we will see a clash with fundamental human rights, such as free expression and the right to privacy’.