We are working on compute infrastructure for big data and artificial intelligence (AI). In addition to existing software and network technologies, we employ “accelerators” to break through performance limitations. Below are selected research topics.
- On-device learning for unsupervised anomaly detection
- Acceleration of machine learning by network-attached FPGAs
- Acceleration of various NoSQL data stores
- Acceleration of big data processing frameworks
- Rack scale architecture for virtual reality
- Data center network with light beam
- 3D stacking many-core processors with wireless chip interconnect
On-device learning for unsupervised anomaly detection (2017-present)
Toward on-device learning, we are working on a neural-network-based online sequential learning and unsupervised anomaly detection (OSL-UAD) algorithm and its related technologies. The original target domain was production lines in factories, but our applications are now expanding to anomaly detection in data centers, homes, and UAVs. One of the biggest issues when applying AI to industry is preparing training data sets for all possible situations, because noise patterns (e.g., vibration) fluctuate and the status of products/tools varies over time. Our OSL-UAD learns the normal patterns of its deployed environment, including noise, on the fly in order to detect unusual ones, so no prior training is needed. Below are demo videos.
You can find our on-device learning approach in the following papers.
- Mineto Tsukada, Masaaki Kondo, Hiroki Matsutani, “OS-ELM-FPGA: An FPGA-Based Online Sequential Unsupervised Anomaly Detector”, Proc. of the 24th International European Conference on Parallel and Distributed Computing (Euro-Par’18) Workshops, pp.506-517, Aug 2018. [Paper]
- Tomoya Itsubo, Mineto Tsukada, Hiroki Matsutani, “Performance and Cost Evaluations of Online Sequential Learning and Unsupervised Anomaly Detection Core”, Proc. of the 22nd IEEE Symposium on Low-Power and High-Speed Chips and Systems (COOL Chips 22), pp.1-3, Apr 2019. [Paper]
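The OS-ELM approach named in the papers above can be viewed as an autoencoder whose random input weights stay fixed while the output weights are updated sample-by-sample with recursive least squares, so the detector adapts online with no prior training set. Below is a minimal NumPy sketch of this idea; the class name, hidden-layer size, and large-identity RLS initialization are illustrative assumptions, not the published design:

```python
import numpy as np

rng = np.random.default_rng(0)

class OSELMAutoencoder:
    """Minimal sketch of an OS-ELM autoencoder for unsupervised anomaly
    detection: random fixed input weights, sequentially updated output
    weights (recursive least squares), anomaly score = reconstruction error."""

    def __init__(self, n_in, n_hidden):
        self.W = rng.standard_normal((n_in, n_hidden))  # fixed random weights
        self.b = rng.standard_normal(n_hidden)          # fixed random biases
        self.P = np.eye(n_hidden) * 1e2                 # RLS inverse-covariance estimate
        self.beta = np.zeros((n_hidden, n_in))          # trainable output weights

    def _hidden(self, x):
        return np.tanh(x @ self.W + self.b)             # 1 x n_hidden

    def score(self, x):
        """Anomaly score of one sample: squared reconstruction error."""
        h = self._hidden(x)
        return float(np.sum((x - h @ self.beta) ** 2))

    def update(self, x):
        """One OS-ELM sequential update with target = input (autoencoder)."""
        h = self._hidden(x)                             # 1 x n_hidden
        Ph = self.P @ h.T                               # n_hidden x 1
        self.P -= (Ph @ Ph.T) / (1.0 + float(h @ Ph))   # Sherman-Morrison step
        self.beta += self.P @ h.T @ (x - h @ self.beta)

model = OSELMAutoencoder(n_in=4, n_hidden=16)
for _ in range(500):                                    # learn "normal" pattern online
    model.update(rng.normal(0.0, 0.1, size=(1, 4)))

normal_score = model.score(rng.normal(0.0, 0.1, size=(1, 4)))
anomaly_score = model.score(np.full((1, 4), 3.0))       # out-of-distribution input
```

After learning only normal samples online, an out-of-distribution input yields a clearly higher reconstruction error, which is the signal thresholded for detection.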
Acceleration of machine learning by network-attached FPGAs (2014-present)
For anomaly detection on high-bandwidth network traffic, we are working on anomaly detection algorithms on FPGA-based high-speed network interface cards (FPGA NICs), including outlier detection (Mahalanobis distance, k-nearest neighbors, local outlier factor), change-point detection (SDAR), and anomalous behavior detection (HMM).
Below is a demo video in which the ChangeFinder algorithm is implemented on an FPGA-based 10Gbit Ethernet NIC.
You can find an overview of our research on “acceleration of machine learning by network-attached FPGAs” in the following invited talk slides.
- Hiroki Matsutani, “Accelerating Anomaly Detection Algorithms on FPGA-Based High-Speed NICs”, The 18th International Forum on MPSoC for Software-defined Hardware (MPSoC’18), Invited Talk, Aug 2018. [Slide]
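Of the outlier-detection criteria listed above, the Mahalanobis distance is the simplest to illustrate: each sample is scored by its distance from a reference distribution, normalized by that distribution's covariance. The following NumPy sketch shows the software reference computation (the function name and the synthetic "traffic feature" data are illustrative, not the FPGA implementation):

```python
import numpy as np

def mahalanobis_scores(X, samples):
    """Score each row of `samples` by its Mahalanobis distance to the
    empirical distribution of the reference window X."""
    mu = X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    d = samples - mu
    # Computes d_i^T * Sigma^{-1} * d_i for every row i at once.
    return np.sqrt(np.einsum('ij,jk,ik->i', d, cov_inv, d))

rng = np.random.default_rng(1)
normal = rng.normal(0.0, 1.0, size=(1000, 3))       # reference feature window
probe = np.vstack([np.zeros(3), np.full(3, 8.0)])   # an inlier and an outlier
scores = mahalanobis_scores(normal, probe)
```

A sample far from the learned distribution (the second probe row) receives a much larger score than an inlier, and flagging samples above a threshold gives the outlier decision.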
Acceleration of various NoSQL data stores (2013-present)
We are working on performance acceleration of various structured storages (aka NoSQL databases), including key-value stores, column-oriented stores, document-oriented stores, and graph databases, by using network-attached FPGAs and network-attached GPUs. We are also working on acceleration of Bitcoin/blockchain search.
Below is a demo video in which a key-value store is accelerated by an FPGA-based 10Gbit Ethernet network interface card.
You can find an overview of our research on “acceleration of various NoSQL data stores” in the following invited talk slides.
- Hiroki Matsutani, “Accelerator Design for Various NOSQL Databases”, The 16th International Forum on MPSoC for Software-defined Hardware (MPSoC’16), Invited Talk, Jul 2016. [Slide]
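The workload an FPGA NIC key-value accelerator offloads is essentially request parsing plus a hash-table lookup. A minimal sketch of that request-handling logic, using a memcached-style text protocol, is shown below (the command names and reply strings are illustrative assumptions, not the protocol of our actual design):

```python
# Minimal sketch of key-value request handling: parse a text-protocol
# command and serve it from an in-memory hash table. An FPGA NIC
# accelerator performs this parse-and-lookup path in hardware.

class KVStore:
    def __init__(self):
        self._table = {}

    def handle(self, line):
        """Parse one command line and return the reply string."""
        parts = line.strip().split(' ', 2)
        cmd = parts[0].upper()
        if cmd == 'SET' and len(parts) == 3:
            self._table[parts[1]] = parts[2]
            return 'STORED'
        if cmd == 'GET' and len(parts) == 2:
            value = self._table.get(parts[1])
            return value if value is not None else 'NOT_FOUND'
        if cmd == 'DEL' and len(parts) == 2:
            removed = self._table.pop(parts[1], None)
            return 'DELETED' if removed is not None else 'NOT_FOUND'
        return 'ERROR'

kvs = KVStore()
```

Because every GET touches only this short, branch-light path, it maps naturally onto a hardware pipeline placed between the Ethernet MAC and host memory.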
Acceleration of big data processing frameworks (2014-present)
A big data processing system typically consists of various software components, such as message queuing, RPC, stream processing, batch processing, machine learning frameworks, and data stores. We are working on their performance acceleration by using network-attached FPGAs and network-attached GPUs.
Below is a demo video where Apache Spark is accelerated by network-attached GPUs via 10Gbit Ethernet. RDDs are cached in the device memory of these remote GPUs.
In addition, stream processing is accelerated by a network-attached FPGA.
You can find an overview of our research on “acceleration of big data processing frameworks” in the following invited talk slides.
- Hiroki Matsutani, “Accelerator Design for Big Data Processing Frameworks”, The 17th International Forum on MPSoC for Software-defined Hardware (MPSoC’17), Invited Talk, Jul 2017. [Slide]
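The common pattern behind network-attached accelerators is that a host ships a batch of data over the network to a remote device, which computes on it and returns the result. The sketch below illustrates only that offload pattern with plain TCP sockets and a toy computation; the framing format and function names are illustrative assumptions, and a real network-attached GPU replaces the worker at far higher bandwidth:

```python
import pickle
import socket
import struct
import threading

def _recv_exact(conn, n):
    """Read exactly n bytes from the connection."""
    buf = b''
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError('peer closed')
        buf += chunk
    return buf

def _send_msg(conn, obj):
    """Length-prefixed message framing: 4-byte big-endian size + payload."""
    payload = pickle.dumps(obj)
    conn.sendall(struct.pack('>I', len(payload)) + payload)

def _recv_msg(conn):
    (n,) = struct.unpack('>I', _recv_exact(conn, 4))
    return pickle.loads(_recv_exact(conn, n))

def worker(server_sock):
    """Remote worker: receive one batch, return elementwise squares."""
    conn, _ = server_sock.accept()
    with conn:
        batch = _recv_msg(conn)
        _send_msg(conn, [x * x for x in batch])

def offload(host, port, batch):
    """Host side: ship a batch to the remote worker and await the result."""
    with socket.create_connection((host, port)) as conn:
        _send_msg(conn, batch)
        return _recv_msg(conn)

srv = socket.socket()
srv.bind(('127.0.0.1', 0))          # ephemeral port on loopback
srv.listen(1)
port = srv.getsockname()[1]
t = threading.Thread(target=worker, args=(srv,))
t.start()
result = offload('127.0.0.1', port, [1, 2, 3])
t.join()
srv.close()
```

Keeping data resident on the remote device between requests, as the RDD caching in the Spark demo does, amortizes this transfer cost across many operations.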
Rack scale architecture for virtual reality (2015-2018)
As an example of our rack-scale architecture technology, remote GPUs connected via 10Gbit Ethernet are pooled and used on demand by virtual reality applications. Below is a demo video.
Data center network with light beam (2012-2018)
You can find a demo video in which a 40Gbps light-beam link is established between two computers and then virtual machine (VM) migration is performed over this “VM highway”.
3D stacking many-core processors with wireless chip interconnect (2009-present)
You can find an overview of our research on “3D stacking many-core processors with wireless chip interconnect” in the following special session slides.
- Hiroki Matsutani, “A Building Block 3D System with Inductive-Coupling Through Chip Interfaces”, The 36th IEEE VLSI Test Symposium (VTS’18), Special Session, Apr 2018. [Slide]
Department of Information and Computer Science
3-14-1 Hiyoshi, Kouhoku-ku, Yokohama, Kanagawa, JAPAN 223-8522
Rooms: 26-207 and 26-210A
Yagami Campus, Keio University
2nd-Year Master Course Students
- Takuma Iwata
- Kaho Okuyama
- Mineto Tsukada
1st-Year Master Course Students
- Tomoya Itsubo
- Rei Ito
- Tokio Kibata
- Koji Suzuki
4th-Year Bachelor Course Students
- Yuto Ozeki
- Takuya Sakuma
- Keisuke Sugiura
- Masaki Furukawa
- Hirohisa Watanabe