COGNITIVE INFRASTRUCTURE THAT HELPS YOU LEAP AHEAD IN ENTERPRISE AI

BRUCE LAI – IBM POWER SYSTEMS BRAND MANAGEMENT LEADER, ASIA PACIFIC

Bruce Lai, the guest speaker, introduced IBM's brand-new supercomputer, pitched with the tagline "Make you a superhero, super powerful."

SUMMIT was the highlight invention IBM presented. The story began with the company's first supercomputer work in 2000. In November 2014, under a contract from the U.S. government, the team developed the POWER8 model, co-researching with Nvidia and Mellanox. Where older supercomputers normally operated through two lanes, IBM's design provided four, marking the shift from Power to "OpenPOWER". That work matured into POWER8, the backbone processor required for SUMMIT.

POWER8 and OpenPOWER grew an enriched software ecosystem including Nvidia, Mellanox, Google, and TYAN. CAPI – the Coherent Accelerator Processor Interface – on IBM POWER8 systems gives solution architects a new means to gain system-level performance. CAPI connects a custom acceleration engine to the coherent fabric of the POWER8 chip. The hybrid solution has a simple programming paradigm while delivering performance well beyond today's I/O-attached acceleration engines.

However, we are stepping into the Artificial Intelligence era, with exponential growth of data. Many enterprises and organizations now hire a Chief Information Officer (CIO) specifically to work with data and extract business insight for decision making. Bruce expected that 80 percent of staff would have access to deep learning and data capabilities in 2018.

AI requires not just algorithms but blended skills, and the data center remains the key area for AI deployment. IBM asked, "What if we could have a super-highway that is both more accurate and faster?" and developed that question into a cognitive system with co-optimized hardware and software.

POWER9 was then launched and introduced to the audience. Bruce went through the specifications, including 1 TB/s of bandwidth into the chip, 7 TB/s of on-chip bandwidth, PCIe Gen4 at twice the speed, and 8 billion transistors. POWER9 powers the AC922, which provides the best infrastructure to run AI thanks to its superior data transfer across multiple devices.

The highlight invention, SUMMIT, was built for Oak Ridge National Laboratory, which specializes in science research as well as work with medical schools. SUMMIT is being used in many fields, including fusion energy, combating cancer through diagnosis, high-energy physics data, and identifying next-generation materials.

Bruce then presented IBM AI from a technology-ecosystem point of view. He noted that limited GPU capacity is the obstacle to developing machine learning. Today, Google deploys POWER9 and keeps adding workloads to it; PayPal uses POWER9 to process millions of transactions, especially for fraud detection and prevention through deep learning; Uber is collaborating with Oak Ridge to develop AI; Tencent runs data-intensive applications and gained 30 percent in efficiency; and ANZ Bank is using AI to detect default risk and to personalize services for customers.

FROM K PLUS TO KADE: TRANSFORMING BANKING WITH AI

DR. THADPONG PONGTHAWORNKAMOL – PRINCIPAL VISIONARY ARCHITECT OF KLABS, KBTG

Thadpong Pongthawornkamol, Ph.D., serves as Principal Visionary Architect of KLabs, KBTG.
Many people have heard stories about KADE of KBank, the engine behind the success of K PLUS, the most-used banking application in Thailand. The next step for KADE is to develop AI that smooths the path connecting users to the platform.

Kasikornbank, a digital banking leader in Thailand, recently established a new technology business unit, the Kasikorn Business-Technology Group (KBTG), that emphasizes research and development of technologies that can deliver new experiences to users. KBTG is currently working on the next ten-year outlook for the financial industry.

Old-school bank vs. today's bank
Thadpong asked the audience for their opinion of the traditional ("old school") bank. In the past, people clearly needed to see the reliability or credibility of a bank before deciding to deposit, or even to invest for the long run. Today, however, banks have digitally transformed to deliver the easiest possible transactions straight to users, which lets banks operate 24 hours a day, 7 days a week. AI will make the platform more intelligent through personalization.

What’s KADE?

KADE, the K PLUS AI-Driven Experience, was established in 2016 with research on beacons for the visually impaired, so that they could transfer money and settle due payments online. For this research the team used colors to differentiate functions and voice prompts to get feedback from users. The team worked with functional teams and applied design thinking to drive product development.

In 2017, KADE brought machine learning to market; for example, it uses machine learning to support people without access to credit scoring, delivering lending experiences to previously unreachable prospects. This could open up a new market for the corporation.

Predictive analytics identifies which customer segments, personas, and lifestyles need lending the most, so that offerings are detected and sent to the right customers. These functions answer questions such as who is paying the bills, or who could reasonably be granted a loan. Historical data, especially on debt, non-performing loans (NPL), and credit bureau records, supervises the learning and guides the choice of a feasible model for analyzing the data.
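
To make this concrete, here is a minimal sketch of how historical repayment outcomes could supervise such a scoring model. It is an illustration only, not KBTG's pipeline; the file name (lending_history.csv) and every column name are hypothetical.

    # Minimal sketch of a supervised credit-scoring model.
    # Assumes a hypothetical CSV of historical lending outcomes;
    # none of these column names come from the talk.
    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    df = pd.read_csv("lending_history.csv")          # hypothetical file
    features = ["monthly_income", "txn_count", "avg_balance", "had_npl"]
    X, y = df[features], df["defaulted"]             # 1 = defaulted, 0 = repaid

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=42)

    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # Score new prospects: probability of default drives the offer decision.
    scores = model.predict_proba(X_test)[:, 1]
    print("AUC:", roc_auc_score(y_test, scores))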

The team has proved the technology, with its filtering functions showing a 10 percent gain in credit-scoring accuracy. As for the goal in 2018, the team continued studying how machine learning can identify personal lifestyle. For example, the platform lets the team analyze the nature of the businesses and transactions generated by a single user, so that the user can be matched to the right products and services. The team expects this to seed models for deep learning down the road.
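
The lifestyle-matching idea can be pictured as comparing a user's transaction profile with candidate product profiles. Below is a minimal sketch using cosine similarity over made-up spending categories; none of the categories, products, or numbers come from the talk.

    # Sketch: match a user to products by comparing spending profiles.
    # Categories and product profiles are made up for illustration.
    from math import sqrt

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
        return dot / norm if norm else 0.0

    # Spending share per category: [dining, travel, shopping, bills]
    user_profile = [0.50, 0.30, 0.15, 0.05]

    product_profiles = {
        "dining cashback card": [0.70, 0.05, 0.15, 0.10],
        "travel rewards card":  [0.10, 0.75, 0.10, 0.05],
        "bill-payment package": [0.05, 0.05, 0.10, 0.80],
    }

    best = max(product_profiles,
               key=lambda p: cosine(user_profile, product_profiles[p]))
    print("Suggested product:", best)
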
A Valentine's feature called "Pro Don Jai" was launched in February 2018, letting users order flowers for their partners via the application. The results showed a two-fold improvement in accuracy and performance.

Finally, Thadpong introduced Tech Jam (www.techjam.tech), a simulation technology platform that is open for anyone to take on challenges in a coding squad, a data science squad, and a design squad. The goal, he explained, is to get to know more people and to have fun.

DIY FRONT-END ANALYTICS

IDAN ZALZBERG – VICE PRESIDENT – DATA, AGODA

Idan serves as Vice President, Data, at Agoda. He has up to 18 years of experience in data science and has been the key person for all data at Agoda. How has Agoda developed its data analytics? Read on for DIY front-end analytics.

About Agoda

Agoda is part of the largest online travel business in the world, with US$78 billion in transactions worldwide.
The company uses data as a business strategy, having invested in the field for years, and it runs data-driven marketing campaigns involving:

  • 1.5 billion bids
  • 1.5 million ads
  • 250 million keywords
  • 14 algorithms

Idan emphasized that all the algorithms are developed in the C language. The team runs A/B testing experiments that combine statistical measures with AI to reduce noise in the data, helping the organization set direction and make decisions more accurately and effectively. Agoda has recently run more than 800 experiments and has failed many times along the way.

Idan shared that most front-end analytics fail because of bugs, systems that stop tracking, and breakage over time; on top of that, a separate data warehouse means chunks of data that cannot be synchronized or correlated between servers.

For front-end analytics tests, he recommended first defining goals, e.g., behavioral goals such as more bookings or more visits to the map page. Then set up a hypothesis and run the experiment, measuring against specific sample groups; this helps reduce noise. Once tests are implemented, each time you need to investigate the key points: what broke the hypothesis, and how did it fail to work? The requirements on back-end information are also critical: full SQL and data-science access to support complex statistics, working adaptively, and controllable changes. In running experiments you are allowed to fail, and errors are fixable.
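
The measurement step behind such an experiment is often a simple two-proportion test on a behavioral goal such as booking conversion. A minimal sketch follows; the counts are invented, and the choice of test is our assumption, not necessarily what Agoda runs.

    # Sketch: two-sided two-proportion z-test for an A/B experiment
    # on booking conversion. Counts are invented for illustration.
    from math import sqrt, erfc

    def ab_test(conv_a, n_a, conv_b, n_b):
        p_a, p_b = conv_a / n_a, conv_b / n_b
        p_pool = (conv_a + conv_b) / (n_a + n_b)
        se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        z = (p_b - p_a) / se
        p_value = erfc(abs(z) / sqrt(2))   # two-sided p-value
        return p_b - p_a, z, p_value

    lift, z, p = ab_test(conv_a=1840, n_a=50_000, conv_b=1952, n_b=50_000)
    print(f"lift={lift:.4%}  z={z:.2f}  p={p:.3f}")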

Idan added, regarding data infrastructure, that data collection should be planned out well from the channel you implement. You also have to be able to visualize the data pipeline, written in any language, and see how the core functions and the pipeline work through the process.
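
One way to see how the core functions and the pipeline work through the process is to keep each stage a small, inspectable function, as in this sketch; the event fields are hypothetical, not Agoda's schema.

    # Sketch of a front-end event pipeline: collect -> validate -> aggregate.
    # Field names are hypothetical, not Agoda's schema.
    from collections import Counter

    REQUIRED = {"user_id", "event", "page"}

    def validate(events):
        """Drop malformed events instead of letting them poison metrics."""
        for e in events:
            if REQUIRED <= e.keys():
                yield e

    def aggregate(events):
        """Count events per (page, event) pair for quick sanity checks."""
        return Counter((e["page"], e["event"]) for e in events)

    raw = [
        {"user_id": 1, "event": "click", "page": "map"},
        {"user_id": 2, "event": "book",  "page": "hotel"},
        {"event": "click", "page": "map"},      # malformed: no user_id
    ]
    print(aggregate(validate(raw)))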

DEMOCRATIZING DATA SCIENCE IN YOUR ORGANIZATION

DR. THANACHART RITBUMROONG – CO-FOUNDER, DATA CAFÉ
DR. VIROT CHIRAPHADHANAKUL – CO-FOUNDER, SKOOLDIO

Data Cafe Thailand is a data science community established collaboratively by MFEC and the Faculty of Engineering, Chulalongkorn University. Dr. Thanachart, a data analytics expert and university instructor, and Dr. Virot, a former data scientist at Facebook, are the co-founders of the cafe. They shared their experiences in applying data to leverage an organization.

From insight to value

They viewed the fundamentals of data science skills as coming from having great content for communication and from questions born of curiosity. According to the statistics they cited, 85 percent of organizations fail to make use of their data or to extract its potential insight. The failures point to shortfalls in resources, data governance, workflow, technology platforms that do not support the functions or cross-functional work, integration of tools, and so on.
People with data science skill sets are remarkably scarce in the labor market. When an organization plans its manpower, the nagging question is whether it should outsource or build up its own people.

Outsourcing has the advantage of letting the organization observe how data can be used, adapting people's skills to new tools and working on large amounts of data. But if we scrap outsourcing and instead grow our own data scientists, who is willing to learn, with the passion to change the traditional process?
In terms of process, many organizations wonder whether they should outsource data scientists or whether an in-house team is worth building.

Facts from data scientists!

The speakers shared the thinking behind these two alternatives. It is very easy to outsource people to reconcile data and cleanse noise, but is that it? The in-house option, meanwhile, often ends with the person perpetually occupied rendering reports and cleansing data for safekeeping, circling the same work loop again and again.
From a data scientist's point of view, we often see people who are great at creating graphics, but is there a question behind them? What does the graphic represent? What does it mean? Visuals produced just to meet a deadline do not move the project forward. A data scientist needs to understand users' demands and turn analytics skills into useful data. For example, Facebook implemented an evaluation system every six months to share insights from each team, and it runs to this day. The Key Performance Indicators (KPIs) for a data scientist should therefore focus on finding data insights that contribute value to the business.

Naturally, organizations look for data scientists equipped with maths, statistics, or business acumen. But the essential characteristics of a data scientist should also include communication skills and a sense for the essential questions, driven by curiosity. Data scientists in the U.S. come from physics, psychology, and many other backgrounds. The speakers also shared interviewing advice: the organization should not focus only on educational background or on calculation skills, frameworks, and tools, but should emphasize candidates' projects, so you can see how they apply the tools to each situation.

Selecting technology that fit to your needs

When you are hesitating between a fixed vendor and open source, the speakers tended to give more weight to open source because it is easy to modify and flexible. For example, cloud solutions now expose APIs to plug in, or can even be mixed with your existing computing model.

Democratizing data science

Both speakers viewed data science as a skill set for everyone, tech and non-tech people alike. Everyone should have access to the data. SQL skills are expected even of product managers and R&D units, with Facebook as the great example; a flavor of such a self-serve query is sketched below.
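
The sketch below shows the kind of self-serve SQL a product manager might run, using Python's built-in sqlite3 so it is runnable anywhere; the bookings table and its columns are invented for illustration.

    # Sketch: self-serve SQL that a non-engineer might run.
    # Table and columns are invented for illustration.
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE bookings (city TEXT, amount REAL)")
    con.executemany("INSERT INTO bookings VALUES (?, ?)",
                    [("Bangkok", 120.0), ("Bangkok", 80.0), ("Tokyo", 200.0)])

    # Revenue by city, largest first.
    for city, total in con.execute(
            "SELECT city, SUM(amount) FROM bookings GROUP BY city ORDER BY 2 DESC"):
        print(city, total)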

How to retain resources?

Leadership is the critical characteristic for retaining these resources. Leaders have to really understand the use of data, be able to make use of it themselves, and develop their people's skills beyond the existing work loop.

Data science skills are now for everyone!

DATA IN THE TRANSFORMATION OF RETAIL

JOHN BERNS – CHIEF DATA OFFICER, CENTRAL GROUP

Retail is an industry being digitally disrupted by e-commerce. The challenges posed by big e-commerce platforms push traditional retail companies to change in order to survive.

Central Group is another great example of a retail company that has changed dramatically. Its latest collaboration, with China's JD, launched JD Central to serve e-commerce users, while the department stores must keep changing as well. Whatever form retail takes, we still need to collect data to analyze our customers.

John gave background on e-commerce in recent years and the revolution of traditional stores moving to online platforms. The path from traditional to online is broadly similar across the variety of players in the marketplace, yet he found that different players run their e-commerce differently because of their historical data and analytics. The resulting insight has led them to advertise differently on social media, with more accurate targeting.

Data has driven the disruption of the industry. Knowing customer demand is the critical part of the process: it determines what new experience to create and how to deliver it. Consumers, brands, marketplaces, and experience are all essential; still, nobody has yet managed to integrate information about people, including from social media, to interpret their lifestyles and profile the essential information for use. John explained that gathering information should feed into the big picture: pieces of information help brands make and deliver better experiences at the front of the store.

On scaling, he gave the example of Lazada, where 1 million product types are currently categorized in the marketplace. Many products cannot scale, however, because of the process automation involved, which raised the questions popping up in his head: "Is it the right category, the right photos, the right marketplace?"

He elaborated the concept and framework of Product Information Management (PIM) for general comprehension. PIM works much like the well-known engineering process of input, process, output. For example, start by framing the essential questions about the products and services; with data in hand, feed it into the model, then filter and analyze; and make sure the result will help you generate value for your business.
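
Read as input, process, output, a PIM flow can be sketched as below; the record fields and filtering rules are ours, for illustration only.

    # Sketch of PIM as input -> process -> output.
    # Record fields and rules are illustrative only.

    def process(record):
        """Filter and normalize one raw product record."""
        if not record.get("name") or record.get("price", 0) <= 0:
            return None                      # filter: reject incomplete input
        return {                             # normalize for the catalog
            "name": record["name"].strip().title(),
            "price": round(record["price"], 2),
            "category": record.get("category", "uncategorized"),
        }

    raw_feed = [
        {"name": " espresso machine ", "price": 129.99, "category": "kitchen"},
        {"name": "", "price": 59.0},         # rejected: no name
    ]
    catalog = [r for r in (process(x) for x in raw_feed) if r]
    print(catalog)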

Source: Blognone Tomorrow