More than 200 people, mostly engineers from LINE and its affiliates, will be speaking at this year’s LINE DEV DAY, with a total of 156 sessions planned.
There will be 30 sessions (20 lectures and 10 panel discussions) with a total of 36 domestic and international guest speakers.
This article will provide a summary of the 20 guest lecture sessions at LINE DEVELOPER DAY 2020, as well as profiles of the speakers.
Machine learning (ML) technology has seen significant development in recent years, surpassing human performance in areas such as speech recognition, image analysis, and natural language translation. However, progress has been slower in application fields such as medicine and disaster prevention, where high-quality labeled big data cannot be gathered easily, leaving considerable room for improvement. This session will outline the history of ML research and the present state of the international ML research community. We will also introduce weakly supervised ML methods and robust ML methods developed by our group, and finally discuss with the audience where ML should go next.
RIKEN / Center for Advanced Intelligence Project / Head of the Center
The University of Tokyo / Graduate School of Frontier Sciences / Professor
Appointed professor at the University of Tokyo in 2014, after completing a master's degree in 2001 and serving as a research assistant and then associate professor at the Tokyo Institute of Technology. Develops theory, algorithms, and industrial applications for ML. Won the Japan Society for the Promotion of Science (JSPS) Prize and the Japan Academy Medal in 2016. Head of the RIKEN Center for Advanced Intelligence Project (AIP) since 2016, coordinating AI research groups for general-purpose basic technology, goal-directed basic technology, and AI in society.
The dramatic spread of smartphones equipped with high-performance sensors has enabled the filming and recording of other people's faces and voices, and even the collection of large quantities of biometric data such as fingerprints and irises. These can also be shared in cyberspace, not only violating privacy but also creating the risk of breaches of biometric authentication. Such high-quality biometric data can be used as training data, making it easy to create convincing fake media such as deepfakes, which may negatively impact people's ability to make decisions. This session will outline these threats and introduce technology that lets users control the distribution of their own biometric data in cyberspace, technology for detecting fake media, and anonymization technology.
National Institute of Informatics / Professor
Master’s (Applied Physics), Tokyo Institute of Technology, 1997. Worked at Hitachi Systems Development Lab. Advisor to Director General & Prof of Information & Society Research Division, NII. Prof in the Dept of Information & Communication Engineering, Graduate School of Information Science & Technology, University of Tokyo. Visiting prof at University of Freiburg 2010. Information Security Culture Award 2016, Docomo Mobile Science Award 2014. Japanese representative at IFIP TC11. PhD (Engineering).
With the rapid advancement of AI making use of machine learning, AI has become a mainstay in many fields in Japan, including facial recognition, crime prevention, and automated driving systems. Meanwhile, the methods of attacking AI have multiplied as well. Adversarial examples cause AI to make false predictions by introducing small changes into the input data; AI training data can be manipulated to corrupt models and open back doors; and functions in frameworks used for AI development can be abused, enabling systems to be hijacked for attacks. These threats have created an urgent need for technology to defend AI, but because many of the principles governing attacks on AI differ fundamentally from those on existing systems, it is difficult to respond with conventional security technology; we need defensive technologies specifically for AI. In this session, entitled "The Basics of AI Security", I'll explain the mechanisms of attacks on AI and give an overview of the technologies for defending it.
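The adversarial-example mechanism mentioned above can be sketched in a few lines. The linear model, weights, and loss below are hypothetical illustrations (not from the session): a fast-gradient-sign-style perturbation nudges the input along the sign of the loss gradient, which is enough to flip the prediction of a toy classifier while changing each input feature only slightly.

```python
import numpy as np

# Hypothetical linear classifier: predicts sign(w . x).
w = np.array([0.5, -1.0, 2.0])   # model weights (illustrative)
x = np.array([1.0, 1.0, 1.0])    # clean input, correctly classified as +1
y = 1.0                          # true label

def fgsm(x, w, y, eps):
    """Fast-gradient-sign-style attack on the loss -y * (w . x):
    its gradient w.r.t. the input is -y * w, and we step along its sign."""
    grad = -y * w
    return x + eps * np.sign(grad)

x_adv = fgsm(x, w, y, eps=0.6)
print(np.sign(w @ x))     # clean prediction: 1.0
print(np.sign(w @ x_adv)) # adversarial prediction: -1.0 (flipped)
```

Real attacks apply the same idea to deep networks, computing the gradient by backpropagation; the perturbation can be small enough to be invisible to humans.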
He is a Registered Information Security Specialist and CISSP, involved in research related to the detection of vulnerabilities in ML and the application of ML to security tasks. He has presented his research at globally celebrated hacker conferences such as Black Hat Arsenal, DEFCON, and CODE BLUE. In recent years, he has contributed to education as an instructor at security camps and as a judge for AI security competitions at international hacker conferences.
Before the cloud era, network administrators assumed they could use IP address or port number as a workload identifier in network access control. Routers or firewalls were configured with rules called network ACLs containing IP addresses or port numbers for packet filtering.
However, as seen in many large-scale data centers, it is becoming challenging to use IP addresses or port numbers to identify workloads in dynamic, mixed environments consisting of varied workloads such as virtual machines and containers on a flat IP Clos network. How can we implement efficient access control in these environments? This session introduces a packet-filtering-based access-control technology that uses workload identity, which we are proposing in association with the LINE Verda office.
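As a concrete illustration of the pre-cloud model described above, the sketch below matches connections against rules keyed only on source address and destination port. The rule format and `check` helper are hypothetical, but they show why such rules break down once workloads are rescheduled and their IP addresses churn.

```python
from ipaddress import ip_address, ip_network

# Hypothetical ACL: first matching rule wins, default deny.
# Each rule identifies a "workload" only by its network location.
RULES = [
    ("allow", "10.0.1.0/24", 5432),  # assumed subnet of database clients
    ("deny",  "0.0.0.0/0",   5432),  # everyone else
]

def check(src_ip, dst_port):
    for action, cidr, port in RULES:
        if ip_address(src_ip) in ip_network(cidr) and dst_port == port:
            return action
    return "deny"  # default deny

print(check("10.0.1.5", 5432))  # allow
print(check("10.9.9.9", 5432))  # deny -- yet if a legitimate client is
# rescheduled onto 10.9.9.9, this rule now blocks a trusted workload
```

Workload-identity-based filtering replaces the address key with an identity attached to the workload itself, so the policy can survive rescheduling and address reuse.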
I completed a PhD program in Intelligence Science and Technology at Kyoto University Graduate School of Informatics in 2016, and hold a PhD in Informatics. I joined the Kyoto University Academic Center for Computing and Media Studies as an assistant professor in April 2016, researching and developing computer networks and network security.
Kata Containers is an open source community working to build a secure container runtime with lightweight virtualization technology that implements standard container interfaces and performs like containers, but provides stronger workload isolation.
At Ant Group and Alibaba, we have thousands of tasks running in Kata containers. Besides a stronger security boundary, resource isolation and performance stability are also key factors for such a large deployment. We are constantly looking at how to get the best performance out of Kata Containers, and we'll share our experience so far in this session.
Kata Containers 2.0 will be released in October at this year's Open Infrastructure Summit.
Ant Group / Staff Engineer
Tao is a container runtime engineer at Ant Group and one of the core maintainers of the Kata Containers project. He advocates cloud native and container technologies, and is experienced in systems programming and optimization.
CERN, the European Laboratory for Particle Physics, provides the infrastructure and resources to thousands of scientists all around the world to uncover the mysteries of the Universe.
Scientific computing requires not only massive amounts of compute resources but also a flexible and scalable infrastructure. To meet these requirements, CERN deployed a private cloud infrastructure based on OpenStack in 2013 to support its users and the organization's internal services. Over the years it has grown from a few hundred cores to a multi-cell deployment spread across different regions.
Now, with several years of experience managing a large OpenStack cloud, we will dive into a day in the life of an OpenStack operator at CERN. We will explain the infrastructure architecture and the daily challenges faced by CERN's OpenStack operators, and discuss how we keep the infrastructure running and evolving, from upgrades to user tickets.
CERN – European Organization for Nuclear Research / Cloud Architect
Belmiro Moreira is an enthusiastic software engineer passionate about the challenges and complexities of architecting and deploying cloud infrastructures in very large-scale environments. He works at CERN, where for the last 10 years his main role has been to design, develop, and build the CERN cloud infrastructure based on OpenStack. Previously he worked on various virtualization projects to improve the large batch farm at CERN. Belmiro also holds a degree in Mathematics.
This session will explain the OpenChain industry standard for open source license compliance, explore its journey from de facto into formal ISO International Standard, and outline how adoption is spreading throughout the global supply chain. A key takeaway will be how OpenChain will provide a positive impact on your consumption or deployment of open source software.
Linux Foundation / OpenChain General Manager
Shane Coughlan is an expert in communication, security, and business development. He currently leads the OpenChain community, is an advisor at the United Nations Technology Innovation Labs, and serves on various boards.
40 years after the birth of TCP and 20 years after the birth of TLS, the basic technologies for Internet communications, the standardization of a new transport layer protocol called QUIC, which will replace both TCP and TLS, is entering a critical phase. Will QUIC and HTTP/3 improve the user experience? Will security, operations, and monitoring methods change? In this session, the presenters, who have been involved in both standardization and implementation, will discuss the problems of existing protocols such as TCP, TLS, and HTTP/2, how QUIC and HTTP/3 will solve them, and what changes are expected in the future.
Fastly / Principal OSS Engineer
Oku Kazuho is the lead developer of the H2O HTTP server, the picotls TLS library, and the quicly QUIC library, used by Fastly and other companies. In addition to writing programs, he is involved in protocol standardization at the IETF. Currently he is hard at work writing extensions for HTTP and TLS.
When neural networks regained popularity in speech recognition about 10 years ago, they were mainly used for the acoustic model of the system (the model that relates the audio to phonetic units). To obtain a complete recognition system, those models would be combined with a language model and a pronunciation model. Ongoing research in recent years has shown that a speech recognition system can be built as a single neural network that encompasses the entire speech-to-text pipeline, the so-called end-to-end system. These models are of interest as they are compact, accurate thanks to their joint optimization, and easy to build, as there is very little need for manual design. On the other hand, in contrast to the previous systems, they have given rise to a number of research problems related to the control and online operation of such models. This talk will describe some of the research Google has done to address these issues.
Google / Senior Staff Research Scientist
Michiel Bacchiani has been a speech researcher with Google for the past 15 years. He currently manages a research group in Google Tokyo focused on jointly modeling speech and natural language understanding. Previously he managed the acoustic modeling team responsible for developing novel neural model architectures for Google speech recognition products. He previously worked at IBM Research, AT&T Labs Research and Advanced Telecommunications Research labs in Kyoto, Japan.
Fugaku is the first 'exascale' supercomputer, thanks to its demonstrated performance in real applications as well as reaching actual exaflops in a new breed of benchmarks such as HPL-AI. But the importance of Fugaku lies in the "applications first" philosophy under which it was developed, and its resulting mission to be the centerpiece for the rapid realization of the so-called Japanese 'Society 5.0' as defined by Japan's national S&T policy. As such, Fugaku's immense power is directly applicable not only to traditional scientific simulation applications but also to Society 5.0 applications that encompass the convergence of HPC, AI, and big data, as well as cyber (IDC & network) and physical (IoT) space, with immediate societal impact as its technologies are made available as cloud resources. In fact, Fugaku is already in partial operation a year ahead of schedule, primarily to obtain early Society 5.0 results, including combatting COVID-19 and resolving other important societal issues.
RIKEN / Center for Computational Science (R-CCS) / Director
The director of RIKEN R-CCS, Japan's top-tier HPC center, conducting research as well as developing and hosting 'Fugaku', the fastest supercomputer in the world in all four major rankings (2020). Also a professor at Tokyo Institute of Technology, continuing his research in HPC, big data, and AI. Commendations include the ACM Gordon Bell Prize (2011) and the IEEE Sidney Fernbach Award (2014), both among the highest awards in HPC, as well as serving as Program Chair for ACM/IEEE Supercomputing 2013 (SC13).
With the rapid development of deep learning, the accuracy of image and voice recognition has come to exceed that of humans. In the future, machine learning will likely come to play a role in making high-stakes decisions involving humans and society. For AI to support human perception and decision making, the reliability of an AI must be comparable to that of a human. In this presentation, we will discuss the issues that must be solved in order to develop trustworthy AI and introduce efforts being made to realize the birth of trustworthy AI in the near future.
University of Tsukuba / Faculty of Engineering, Information and Systems / Professor
RIKEN / Center for Advanced Intelligence Project / Team Leader
Graduated from Tokyo Institute of Technology Interdisciplinary Graduate School of Science & Engineering 2003. PhD (Engineering). Joined IBM Research Tokyo. Research assistant (teaching assistant) at Tokyo Institute of Technology Interdisciplinary Graduate School of Science & Engineering 2004. Associate prof at University of Tsukuba Systems & Information Engineering department 2009, became prof 2016. Team leader of AI Security & Privacy Team at RIKEN Center for Advanced Intelligence Project 2016.
Many of you have probably heard people say things like "Oh, you can't do this on the web" or "It's better to do this with an app." However, in recent years the web has become more versatile than ever before! In this session, we introduce technologies and real-world examples, including APIs and PWAs, that meet web standards and offer a UX similar to that of apps. We also talk about web performance, a hot topic in recent years, explaining how speed has become a key demand, outlining important metrics, and giving actual examples of measures taken for improvement.
Google / Web Ecosystem Consultant
I am a tech consultant at Google Japan, where I have been working since 2018. I oversee promotion of partner-facing technology in the web domain, and am currently working to popularize PWAs, AMP, and similar technologies. I enjoy improving site performance and exploring new web technologies.
In this talk, we discuss the emergence of the Lakehouse technology pattern, combining the best elements of data lakes and data warehouses. Lakehouses are enabled by a new system design: implementing data structures and data management features similar to those in a data warehouse, directly on the kind of low-cost storage used for data lakes. We will also discuss some of the foundational technologies we are building at Databricks to enable the Lakehouse, including Apache Spark, Delta Lake, and Delta Engine.
Databricks / Cofounder & Chief Architect
Reynold Xin is a co-founder and Chief Architect at Databricks. He is also a co-creator of and the top contributor to the Apache Spark project. He received a PhD in Computer Science from the University of California, Berkeley.
Through its many businesses, including search engines, media, and e-commerce, Yahoo! JAPAN has accumulated a wealth of data, which it analyzes to further improve its services.
In November 2019, Yahoo! launched Yahoo! Data Solutions in order to let external companies and independent organizations utilize the power of data, and the service has been growing ever since.
In this session we will go over the fundamental technologies that support the data solution service and outline the results we have seen from the service so far.
Yahoo Japan / Division Data Group, Data Application, Data Solution Department / Senior Manager
I joined Yahoo! in 2003. After working in service development, platform operations, and internal data-driven systems, I was put in charge of the department overseeing system development for data solution enterprises.
This session explains the benefits, selection guidelines and implementation methods for using LIFF, the technology behind LINE MINI App, and the Messaging API, vital to communications, together with AWS serverless architecture—so you can start developing services using LINE tomorrow. Why do LINE MINI App, LIFF, and Messaging API work so well with AWS serverless architecture? What architectures are possible? We’ll look at how they affect production workload from the perspective of scalability, deployment and operational monitoring. We answer the “why, what and how” of LINE with AWS Serverless, based on service-development use cases using LINE and recent AWS updates.
Amazon Web Services Japan / Internet Media Solutions Department, Technology Management Division / Solution Architect
As a systems integrator, I mainly developed infrastructure for mission-critical carrier systems and high-traffic entertainment industry web systems, and I became interested in business-linked architecture design and realized the potential of the cloud. I joined AWS in May 2019 and currently support digital-native/internet-media clients among others.
It is fun to write a library or a framework. It allows us to play with many interesting ideas that were not possible before due to constraints in others' work. However, utmost care must be taken to build it well.
In this session, Trustin Lee, the founder of the Netty and Armeria projects, shares opinionated key practices from his recent work that may be useful when you build your own library or framework, or even design an API for your project.
Trustin is a software engineer at Databricks who enjoys designing frameworks and libraries that yield the best experience for developers. He is best known as the founder of Netty and Armeria. Netty is the most popular asynchronous networking framework in the JVM ecosystem, powering countless large-scale services in the industry. Armeria is a microservice framework built on top of Reactive Streams and Netty, designed with smooth migration to the reactive paradigm in mind, providing convenient integration with a wide range of technologies including gRPC, Kotlin, and Spring Boot.
The heart of open source is people working together for shared benefit. Sounds nice, but does that happen in real life? This session will answer with a resounding yes. We will cover different types of collaboration practice, including shared brainstorming, maintenance, and stewardship. You'll learn what communication tools are involved and what types of behaviors set up success. "Open Source collaboration: from A to Z" is a real story that continues today. We will use real events from two projects, Armeria and Zipkin, to exemplify practices that resulted in mutual gain. While these two projects are both open source, many of these practices work in any software project. Throughout the session, consider what you work on and whether it overlaps with open source you consume. Perhaps when you leave, you will know not just how to give back, but how to improve your collaboration skills at the same time!
Adrian has been a routine contributor to open source for over ten years. He’s also founded a few projects, notably jclouds and feign. Currently, he spends more time on Zipkin, a volunteer-led distributed tracing project.
Cofacts is a crowdsourced fact-checking system that mainly focuses on messages relayed on LINE. It serves mostly users in Taiwan, while a forked project serves users in Thailand. LINE is the most popular messaging app in Taiwan, widely used for both personal and business communication.
However, unverified information can quickly go viral via LINE. Users can forward messages to the Cofacts system through the LINE chatbot, and then receive the results of fact-checking from editors.
The system interacts with users in a chatbot fashion, replying to instant queries with the underlying fact-checking system for users to get different perspectives and reduce the flow of misinformation.
Cofacts is entirely open, including the data set of unverified messages, the fact-checking reports, and the source code. Cofacts was founded at the end of 2016 and combines different tech stacks on the LINE platform. This session will show you how we built Cofacts on top of LINE.
KuanHung Kuo (ggm)
Cofacts.org & g0v.tw
KuanHung Kuo is a LINE API Expert, a g0v participant, a full-stack developer, and a startup co-founder. He has been working in startups as a co-founder for almost ten years and has participated in civic movements and developed civic-tech projects. Technically, he specializes in back-end development and algorithm design, and is an ACM-ICPC medalist.
Managing a service is a constant battle against instability. As time progresses and the number of users increases, services stray further from their initial ideals and purpose, sacrificing stability in the process.
In this session, we discuss our thoughts on why and how services become unstable, how to grow without losing sight of your original intentions, how to create systems that accommodate these solutions, and how to build services that can last without becoming unstable.
Fukatsu Takayuki is an interaction designer. After working at tha ltd., he became highly active in the Flash community. After going independent in 2009, he focused his activities mainly in smartphone app UI design, going on to establish the creative firm THE GUILD for Art & Mobile. His current activities include acting as CXO for NOTE Co., Ltd., which manages the note media platform. He is also a prolific writer and public speaker.
CEO of the Product. Product “owner”. “The Business”. With terms like these, it is no wonder that the relationship between Product Managers and Engineers can be contentious at times. Better products get built when teams are collaborating and working together across disciplines. Product managers and engineers provide needed balance to each other. But too often the balance gets shifted and an “us vs them” mentality emerges. The lack of trust erodes collaboration and keeps teams from operating at a high-performance level. This talk aims to address some common misconceptions about product managers, and discuss how product managers and engineers can work better together to reach a common goal – building products people love.
Mind the Product / Chief of Staff
Emily Tate is Chief of Staff for Mind the Product, the world’s largest community of product people. Prior to this, Emily spent over 10 years in product leadership as a consultant with Pivotal Labs and product manager in the travel industry. Emily is passionate about helping people level up their product and leadership skills and enjoys talking about new ways to make products people love. She was listed as one of “52 Women Making an Impact in Product Management.”
Above is a list of the 20 guest lecture sessions and their speakers. We hope you'll find it helpful in choosing which sessions to attend. We look forward to seeing you register and attend!