
Research Introduction

In our laboratory, working within AI broadly, we apply machine learning and mathematical optimization to problems in urban traffic, disaster mitigation, ground classification, railroads (operation rescheduling and stop control), and energy optimization.

We therefore welcome students from all four programs: those in "Mechanical and Applied Physics" and "Electrical and Information" who are interested in developing learning algorithms to solve a target problem (the approach side); those in "Architecture and Design" who are drawn to emergent design with artificial life (ALife), facility placement with multi-agent technology, autonomous distributed systems, affordance, and human decision-making in general; and those in "Applied Chemistry and Environment," whose subjects of chemical engineering control systems and energy management are deeply related to our work.

Not long ago, "intelligent X" was all the rage; now we are in an AI boom centered on deep learning.

Indeed, backed by large amounts of data, machines' powers of recognition and discrimination may already have surpassed those of humans.

However, what about "judgment" and "action" based on recognition?

  • How do we create new energy, and how do we use it? (Multi-stage planning: sequential decision making; see the sketch after this list)
  • What does autonomous driving that reflects the preferences of its passengers look like? (Multi-objective planning: multi-objective sequential decision making)
  • If we can diagnose how vulnerable the ground is, can we then do the right thing?
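The first two bullets come down to sequential decision making: choosing actions over time so that accumulated reward is maximized. A minimal sketch, assuming a toy Markov decision process with a hypothetical battery-level state and made-up rewards (none of the numbers come from our projects), might look like this in Python:

    # Toy sequential decision problem: at each step we either "store" or "use"
    # energy; the state is a battery level in {0, 1, 2}. The dynamics and
    # rewards below are illustrative assumptions, not project data.
    states = [0, 1, 2]
    actions = ["store", "use"]
    gamma = 0.9  # discount factor for future reward

    def step(s, a):
        """Return (next_state, reward) under the toy dynamics."""
        if a == "store":
            return min(s + 1, 2), -0.1   # small cost to store energy
        return max(s - 1, 0), float(s)   # more stored energy, more benefit when used

    # Value iteration: repeatedly back up the best one-step lookahead value.
    V = [0.0] * len(states)
    for _ in range(100):
        for s in states:
            V[s] = max(step(s, a)[1] + gamma * V[step(s, a)[0]] for a in actions)

    # Greedy policy with respect to the converged values.
    policy = {s: max(actions, key=lambda a: step(s, a)[1] + gamma * V[step(s, a)[0]])
              for s in states}
    print(V, policy)

The multi-objective case in the second bullet would replace the single scalar reward with a vector of rewards (comfort, travel time, energy use, and so on) and an explicit trade-off among them.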

"Do the Right Thing" - the title of a book by one of my supervisors, Professor Stuart Russell.

What matters here, I think, is that the word is "Right" rather than "Optimal."

In engineering, which is a strictly closed world, we define an objective function and aim to maximize it. In the real world of uncertainty and dynamic change, however, our actions are decisions made while the objective itself shifts subtly with the situation; we are free to raise or lower the bar. Understanding the mechanisms and incentives behind how objectives are set is what motivates the AI research in our laboratory.

Q & A on AI

What will happen when everyone, machines and humans alike, becomes smart?

  • Intelligence that maximizes its own self-interest
  • Intelligence that puts first the interests of the community it belongs to and of its master
  • Intelligence that seeks to restore the environment of the entire planet

At which of these levels of intelligence can we call something smart?

Here we run into the dilemma problem of multi-agent systems.
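In its simplest form, the dilemma is the classic two-agent payoff structure sketched below (standard textbook prisoner's-dilemma numbers, purely illustrative): each agent's individually rational choice leaves both worse off than mutual cooperation.

    # Two-agent social dilemma with prisoner's-dilemma-style payoffs.
    # The payoff numbers are the usual textbook values, not study results.
    payoff = {  # (my action, other's action) -> my payoff
        ("cooperate", "cooperate"): 3,
        ("cooperate", "defect"):    0,
        ("defect",    "cooperate"): 5,
        ("defect",    "defect"):    1,
    }

    def best_response(other_action):
        """Action that maximizes my own payoff, given the other's action."""
        return max(["cooperate", "defect"], key=lambda a: payoff[(a, other_action)])

    # Whatever the other agent does, defecting is individually better...
    assert best_response("cooperate") == "defect"
    assert best_response("defect") == "defect"

    # ...yet mutual defection (1, 1) is worse for both than mutual cooperation (3, 3).
    print(payoff[("defect", "defect")], payoff[("cooperate", "cooperate")])

Bridging that gap between individual and collective rationality is exactly what the "emergence of cooperation" discussed below is about.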

Our research focuses on methods of "action" and "decision-making" that can find a workable compromise for the interests of the Earth, that is, of the entire system, including the humans who use transportation, railroads, and energy.

The banner for mechanisms that think at the level of both the individual and the whole may well be the "emergence of cooperation," but we are all people who dislike an overbearing, top-down attitude. Our policy is to keep top-down restrictions to a minimum.

"A little bit of cleverness, flexibility, and resourcefulness" -- we think we can achieve this if we can afford to look around the neighborhood a little. The recent spread of social media and crowdsourcing is a wonderful product embodied in the trend toward smarter "connecting 'advanced knowledge' with 'a little bit of smarts.'" These do not seem to be cooperative efforts born out of a "let's make the world a better place" kind of idea, like the patriots of the Meiji Restoration (that's why I like them). ... Wouldn't you like to elucidate these familiar phenomena one by one?
