The thirty-fourth Conference on Neural Information Processing Systems (NeurIPS) took place last week. Unlike last year, when many of us traveled to Vancouver in person, this year GoodAI, like everyone else, could only attend virtually. With nearly 2,000 research papers and countless workshops and tutorials, there was an overwhelming amount of exciting new ideas, fascinating research, deep discussion, and virtual socializing packed into the seven days the conference spanned.
Check out our top picks of workshops, tutorials, and articles here.
Originally published at https://www.goodai.com on December 16, 2020.
The article below summarizes the findings of a recent paper published in Nature (link here) co-authored by GoodAI’s Senior Research Scientist Nicholas Guttenberg.
This image shows the long-term evolutionary and ecological impacts of major events of extinction and speciation. Colours represent the geological periods from the Tonian, starting 1 billion years ago, in yellow, to the current Quaternary Period, shown in green. The red to blue colour transition marks the end-Permian mass extinction, one of the most disruptive events in the fossil record.
Scientists have long believed that mass extinctions create productive periods of species evolution, or “radiations,” a model called “creative destruction.” However, a new analysis provides evidence that this is not necessarily the case. A new study led by scientists affiliated with the Earth-Life Science Institute at Tokyo Institute of Technology used machine learning to examine the co-occurrence of fossil species and found that radiations and extinctions are rarely connected, and thus mass extinctions generally don’t cause mass radiations. …
The VeriDream project, an international consortium of six organizations across Europe, has been awarded €2 million by the European Innovation Council to carry out a research and innovation strategy to improve robotic performance at small and medium-sized enterprises (SMEs) using artificial intelligence (AI). The project started in October 2020 and will run for two years.
Robots act in the real world. When deploying AI methods on robots, the continuous and dynamic nature of the physical world raises many challenges that are not encountered in purely digital domains such as Internet search and social networks. To address these challenges, VeriDream builds on the DREAM and RobDream research projects to pursue a two-fold innovation strategy for AI in robotics. Its deep innovation strategy will strive to achieve high technological readiness in a set of use cases at a warehouse logistics start-up. …
By Olga Afanasjeva
What kind of AGI do we want?
To me, this question means: what do humans value, and what do we want the world to be like in the millennia to come? What kind of universe do we want to create, and to that end, do we need an AGI that is a successor to the human mind, or something qualitatively different?
Next to the notions of mathematical perfection, equilibrium, and optimality, the human mind comes across as somewhat imperfect. But the mind’s erratic nature is exactly what makes us dream and sparks our progress: we’re impulsive, curiosity-driven, looking for excitement and surprise, thrilled by our ability to discover, create and influence things in unexpected ways. …
Artificial intelligence (AI) research and development company GoodAI has awarded $24K of a $300K fund to a group of researchers from the Czech Institute of Informatics, Robotics, and Cybernetics at the Czech Technical University (CTU), in order to carry out a project that both parties believe will advance progress towards human-level artificial intelligence. The project will be led by Senior Researcher Tomáš Mikolov, who invented the popular Word2vec natural language processing algorithm and recurrent neural language models, and has previously worked as a research scientist at Facebook AI and Google Brain.
The goal of this research project is to build on and improve novelty search, essentially allowing AI to learn and discover new things during its lifetime, rather than learning only through maximization of some objective function. …
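For intuition, novelty search rewards candidates for behaving differently from anything seen before, rather than for scoring well on a task objective. A toy sketch of that idea (all names, the one-dimensional “behaviour,” and the threshold are purely illustrative, not from the project):

```python
import random

random.seed(0)

def novelty(behavior, archive, k=3):
    # Novelty of a behaviour = mean distance to its k nearest
    # neighbours in the archive of previously seen behaviours.
    if not archive:
        return float("inf")
    dists = sorted(abs(behavior - b) for b in archive)
    nearest = dists[:k]
    return sum(nearest) / len(nearest)

# Toy search loop: instead of maximising a task reward, keep the
# candidates whose behaviour is sufficiently novel relative to
# everything already in the archive.
archive = []
for _ in range(50):
    candidate = random.uniform(0, 10)  # stand-in for an agent's behaviour
    if novelty(candidate, archive) > 0.5:
        archive.append(candidate)
```

The archive ends up covering the behaviour space rather than converging on a single optimum, which is the property that makes the approach attractive for open-ended discovery.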
By Simon Andersson
AI agents often operate in partially observable environments, where only part of the environment state is visible at any given time. An agent in such an environment needs memory to compute effective actions from the history of its actions and observations. The agent, then, is faced with the difficult problem of simultaneously learning to maintain a representation of its history and to compute the right actions from it. …
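A minimal sketch of the idea (class and constants are hypothetical, not from the article): an agent that maintains a compressed summary of its observation history and acts on that summary rather than on the raw observation alone.

```python
class MemoryAgent:
    """Agent for a partially observable environment: it keeps an
    internal memory summarising past observations."""

    def __init__(self):
        self.memory = 0.0  # compressed summary of the history so far

    def act(self, observation):
        # Update the history representation; an exponential moving
        # average stands in for a learned update such as an RNN cell.
        self.memory = 0.9 * self.memory + 0.1 * observation
        # Compute the action from memory, not the raw observation alone.
        return 1 if self.memory > 0.5 else 0
```

In a learned agent, both the memory update and the action computation would be trained jointly, which is exactly the difficulty the article discusses.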
Read the full article here.
Originally published at https://www.goodai.com on August 24, 2020.
Last week GoodAI organized the first Meta-Learning & Multi-Agent Learning Workshop, which over the course of the week saw more than 60 participants from across the world take part, including speakers from Google Brain, DeepMind, OpenAI, the University of Oxford, Stanford University, MIT, and many more.
The workshop consisted of 19 talks as well as discussions and Big Picture panels, and was inspired by GoodAI’s Badger Architecture, covering a broad range of related topics.
At the start of the workshop, GoodAI announced a new initiative, GoodAI Grants, a $300,000 fund that will be used to support researchers or research groups working on topics that build on and improve GoodAI’s Badger Architecture.
You can read the full article and watch 14 of the videos here.
Originally published at https://www.goodai.com on August 21, 2020.
The fund was announced today at the Meta-Learning & Multi-Agent Learning Workshop which is being run by GoodAI. The private workshop, taking place mostly online, was joined on the first day by over 50 participants from across the world and will see speakers from Google Brain, DeepMind, University of Oxford, University College London, MIT, and many more throughout the week.
The fund is open to researchers, or research groups, from across the world who are interested in tackling some of the open questions related to GoodAI’s Badger Architecture. …
By Jaroslav Vítků
In deep learning, tasks are usually solved by a single, large, monolithic artificial neural network.
By contrast, one of the defining properties of the Badger architecture is modularity: instead of using one big neural network, a Badger agent is composed of many small Experts that solve the whole task collaboratively. A further assumption is that these Experts share their weights and are therefore identical at the beginning.
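One way to picture this is a toy sketch under our own simplifying assumptions (not GoodAI’s implementation): many small experts driven by one shared weight matrix, identical at initialization and differentiated only by their local inputs.

```python
import math
import random

random.seed(0)
DIM, N_EXPERTS = 4, 8

# One weight matrix shared by all experts: they start identical, and
# only their local inputs differentiate their behaviour over time.
shared_w = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(DIM)]

def expert_step(state, inbox, w):
    # Each small expert updates its own state from incoming input,
    # using the same shared weights as every other expert.
    pre = [s + m for s, m in zip(state, inbox)]
    return [math.tanh(sum(w[i][j] * pre[j] for j in range(DIM)))
            for i in range(DIM)]

# Local states start at zero; each expert sees a distinct observation.
states = [[0.0] * DIM for _ in range(N_EXPERTS)]
observations = [[i / N_EXPERTS] * DIM for i in range(N_EXPERTS)]

# One communication round: every expert receives the mean of all states
# plus its own observation, then updates via the shared policy.
mean_state = [sum(s[d] for s in states) / N_EXPERTS for d in range(DIM)]
states = [expert_step(s, [o_d + m_d for o_d, m_d in zip(o, mean_state)], shared_w)
          for s, o in zip(states, observations)]
```

After one round the experts’ states already diverge, even though every expert runs exactly the same shared policy.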
Although this approach introduces several complications, it also has many benefits.
Several benefits of modularity include: