derik pridmore

HIRING: deep reinforcement learning startup in SF


  • Related Content

    • By Khue
      New offers will be updated frequently in this topic.
    • By Khue
      The Google DeepMind team published their paper, Mastering the game of Go with deep neural networks and tree search, in Nature on 28th January 2016. It describes a new approach to computer Go that combines Monte-Carlo tree search with deep neural networks that have been trained by supervised learning from human expert games and by reinforcement learning from games of self-play. This is the first time ever that a computer program has defeated a human professional player.
      The game of Go is widely viewed as an unsolved “grand challenge” for artificial intelligence. Despite decades of work, the strongest computer Go programs still only play at the level of human amateurs. In this paper they describe a Go program called AlphaGo. This program was based on general-purpose AI methods, using deep neural networks to mimic expert players, and further improving the program by learning from games played against itself. AlphaGo won over 99% of games against the strongest other Go programs. It also defeated the human European champion by 5–0 in tournament games, a feat previously believed to be at least a decade away.
      In March 2016, AlphaGo will face its ultimate challenge: a 5-game challenge match in Seoul against the legendary Lee Sedol, the top Go player in the world over the past decade.
      Paper: deepmind-mastering-go.pdf
    • By Khue
      Microsoft is making the tools that its own researchers use to speed up advances in artificial intelligence available to a broader group of developers by releasing its Computational Network Toolkit on GitHub.
      The researchers developed the open-source toolkit, dubbed CNTK, out of necessity. Xuedong Huang, Microsoft’s chief speech scientist, said he and his team were anxious to make faster improvements to how well computers can understand speech, and the tools they had to work with were slowing them down.
      So, a group of volunteers set out to solve this problem on their own, using a homegrown solution that stressed performance over all else.
      The effort paid off.
      In internal tests, Huang said, CNTK has proved more efficient than four other popular computational toolkits that developers use to create deep learning models for things like speech and image recognition, because it has better communication capabilities.
      “The CNTK toolkit is just insanely more efficient than anything we have ever seen,” Huang said.
      Those types of performance gains are incredibly important in the fast-moving field of deep learning, because some of the biggest deep learning tasks can take weeks to finish.

      Over the past few years, the field of deep learning has exploded as more researchers have started running machine learning algorithms using deep neural networks, which are systems that are inspired by the biological processes of the human brain. Many researchers see deep learning as a very promising approach for making artificial intelligence better.
      Those gains have allowed researchers to create systems that can accurately recognize and even translate conversations, as well as ones that can recognize images and even answer questions about them.
      Internally, Microsoft is using CNTK on a set of powerful computers that use graphics processing units, or GPUs.
      Although GPUs were designed for computer graphics, researchers have found that they also are ideal for processing the kinds of algorithms behind these major advances in technology that can speak, hear and understand speech, and recognize images and movements.
      Chris Basoglu, a principal development manager at Microsoft who also worked on the toolkit, said one of the advantages of CNTK is that it can be used by anyone from a researcher on a limited budget, with a single computer, to someone who has the ability to create their own large cluster of GPU-based computers. The researchers say it can scale across more GPU-based machines than other publicly available toolkits, providing a key advantage for users who want to do large-scale experiments or calculations.

      Xuedong Huang (Photography by Scott Eklund/Red Box Pictures)
      Huang said it was important for his team to be able to address Microsoft’s internal needs with a tool like CNTK, but they also want to provide the same resources to other researchers who are making similar advances in deep learning.
      That’s why they decided to make the tools available via open source licenses to other researchers and developers.
      Last April, the researchers made the toolkit available to academic researchers via CodePlex, under a more restricted open-source license.
      But starting Monday it also will be available, via an open-source license, to anyone else who wants to use it. The researchers say it could be useful to anyone from deep learning startups to more established companies that are processing a lot of data in real time.
      “With CNTK, they can actually join us to drive artificial intelligence breakthroughs,” Huang said.
      Microsoft Blog
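
The AlphaGo post above describes combining Monte-Carlo tree search with a learned policy network that biases which moves the search explores. Below is a minimal, illustrative sketch of that idea, not AlphaGo's actual implementation: PUCT-style selection with a stub (uniform) policy prior and random rollouts in place of a value network, on a toy "race to 10" game where players alternately add 1 or 2 and whoever reaches exactly 10 wins. All names here are hypothetical.

```python
import math
import random

# Toy two-player game standing in for Go: players alternately add 1 or 2
# to a running total; whoever reaches exactly 10 wins.
TARGET = 10

def legal_moves(total):
    return [m for m in (1, 2) if total + m <= TARGET]

def policy_prior(total, moves):
    # Stub for a learned policy network: uniform priors here.
    return {m: 1.0 / len(moves) for m in moves}

class Node:
    def __init__(self, total, to_play):
        self.total, self.to_play = total, to_play
        self.children = {}        # move -> Node
        self.N, self.W = 0, 0.0   # visit count, accumulated value
        self.P = {}               # move -> prior probability

def select_move(node, c_puct=1.4):
    # PUCT-style selection: exploit mean value Q, explore via prior P.
    best, best_score = None, -float("inf")
    for m, p in node.P.items():
        child = node.children.get(m)
        q = child.W / child.N if child and child.N else 0.0
        n = child.N if child else 0
        u = c_puct * p * math.sqrt(node.N + 1) / (1 + n)
        if q + u > best_score:
            best, best_score = m, q + u
    return best

def rollout(total, player):
    # Random playout standing in for a learned value estimate.
    while True:
        total += random.choice(legal_moves(total))
        if total == TARGET:
            return player
        player = 1 - player

def simulate(root):
    node, path = root, [root]
    while node.total < TARGET:
        move = select_move(node)
        if move not in node.children:        # expand a new leaf
            child = Node(node.total + move, 1 - node.to_play)
            moves = legal_moves(child.total)
            if moves:
                child.P = policy_prior(child.total, moves)
            node.children[move] = child
            path.append(child)
            node = child
            break
        node = node.children[move]
        path.append(node)
    if node.total == TARGET:
        winner = 1 - node.to_play            # the player who just moved won
    else:
        winner = rollout(node.total, node.to_play)
    for n in path:                           # backup: credit the player who moved into n
        n.N += 1
        n.W += 1.0 if winner == 1 - n.to_play else 0.0

def best_move(total, to_play=0, n_sims=400):
    root = Node(total, to_play)
    root.P = policy_prior(total, legal_moves(total))
    for _ in range(n_sims):
        simulate(root)
    # Play the most-visited move, as in AlphaGo's final move choice.
    return max(root.children, key=lambda m: root.children[m].N)
```

For example, from a total of 8 the search concentrates its visits on adding 2 (the immediate win). AlphaGo replaces the uniform prior with a policy network trained on expert games and self-play, and mixes rollouts with a value network, but the search skeleton is the same.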