Recommended by Diego Klabjan, Professor at Northwestern Engineering, who said: “Perhaps the biggest news in Q1 was OpenAI and their GPT-2 model. The caveat is that as a non-profit organization they decided not to release the (trained) model due to concerns related to its possible use to create fake text”.
Extract: Our model, called GPT-2 (a successor to GPT), was trained simply to predict the next word in 40GB of Internet text. Due to our concerns about malicious applications of the technology, we are not releasing the trained model. As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper.
GPT-2 is a large transformer-based language model with 1.5 billion parameters, trained on a dataset of 8 million web pages. GPT-2 is trained with a simple objective: predict the next word, given all of the previous words within some text. The diversity of the dataset causes this simple goal to contain naturally occurring demonstrations of many tasks across diverse domains. GPT-2 is a direct scale-up of GPT, with more than 10X the parameters and trained on more than 10X the amount of data.
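The "simple objective" described above can be sketched at toy scale. This is a hedged illustration, not OpenAI's code: the tiny bigram count table stands in for the 1.5-billion-parameter transformer, but the loss being minimized, the average negative log-likelihood of each token given what came before, is the same next-word-prediction idea.

```python
import numpy as np

# Minimal sketch of the next-word-prediction objective a language model like
# GPT-2 is trained with: assign a probability to each candidate next token
# given the preceding context, and minimize the negative log-likelihood of
# the token that actually follows. The bigram table here is a toy stand-in
# for the transformer.

def bigram_counts(tokens, vocab_size):
    counts = np.ones((vocab_size, vocab_size))  # add-one smoothing
    for prev, nxt in zip(tokens[:-1], tokens[1:]):
        counts[prev, nxt] += 1
    return counts

def next_word_nll(tokens, counts):
    """Average negative log-likelihood of each token given its predecessor."""
    probs = counts / counts.sum(axis=1, keepdims=True)
    nlls = [-np.log(probs[prev, nxt])
            for prev, nxt in zip(tokens[:-1], tokens[1:])]
    return float(np.mean(nlls))

corpus = [0, 1, 2, 0, 1, 2, 0, 1]          # toy token ids
counts = bigram_counts(corpus, vocab_size=3)
loss = next_word_nll(corpus, counts)        # lower loss = better prediction
```

On a real corpus the context is the entire preceding text rather than one token, and the count table is replaced by a neural network, but the training signal is exactly this loss.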
About: Trace the development of Machine Learning from the early days of a computer learning how to play checkers, to machines able to beat world masters in chess and Go. Understand why large data is so important to Machine Learning, and how the collection of massive amounts of data provides Machine Learning programmers with the information they need to develop learning algorithms.
Simple examples will help you understand the complex math and probability statistics underlying Machine Learning. You will also see real-world examples of Machine Learning in action and uncover how these algorithms are making your life better every day.
Recommended by Jessica Lennard, Visa. Jessi is Director of External Relations for Visa’s Data Science Lab, having previously been Visa’s UK Head of Regulation and Public Affairs. She has been working in lobbying, policy, communications, reputation and crisis for over a decade. During that time, she has worked for political parties, businesses (start-up to FTSE 100), consultancies, think tanks and NGOs. Her particular area of expertise is highly regulated, highly politicised, technology-driven sectors, including telecoms, energy, and Fintech.
Extract: Much of the beauty in our universe is born from diversity. If everything were completely even, we’d only have a grey sludge a few degrees above absolute zero evenly filling our whole universe. We’d have no galaxies, no stars, no planets, no earth, no people…
Diversity and inclusion in modern society face a new risk: Artificial Intelligence. At its best, Artificial Intelligence can have a significant positive influence on our home and work lives: finding new cures for diseases, helping us manage our time better and finding new music that we’d never have listened to before. Without proper checks and balances, however, there’s no guarantee that we’ll see that positive future for AI, or that its gains will be shared evenly across our society. Instead we may end up with AI that limits freedom, distributes benefits unfairly and reinforces or amplifies existing biases.
Recommended by Robinson Piramuthu, Chief Scientist for Computer Vision at eBay. “This article has useful tips for new practitioners wanting to work on large scale retrieval.”
Extract: Visual search is becoming important. One recent report revealed that 27 percent of the searches on major websites like Google, eBay, Amazon and others are now for images. Another study indicates that 75 percent of online shoppers regularly or always search for visual content before making a purchase. The prominence of visual search in retail applications has made it a key component of success — but only if it works.
Extract: Modern machine learning is increasingly applied to create amazing new technologies and user experiences, many of which involve training machines to learn responsibly from sensitive data, such as personal photos or email. Ideally, the parameters of trained machine-learning models should encode general patterns rather than facts about specific training examples. To ensure this, and to give strong privacy guarantees when the training data is sensitive, it is possible to use techniques based on the theory of differential privacy. In particular, when training on users’ data, those techniques offer strong mathematical guarantees that models do not learn or remember the details about any specific user. Especially for deep learning, the additional guarantees can usefully strengthen the protections offered by other privacy techniques, whether established ones, such as thresholding and data elision, or new ones, like TensorFlow Federated learning.
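The guarantee described above typically comes from differentially private training, as in the DP-SGD algorithm: each example's contribution to a gradient update is clipped to a fixed norm, and calibrated Gaussian noise is added so that no single user's data can be recovered from the model. The sketch below is illustrative only; the `clip_norm` and `noise_multiplier` values are assumed hyperparameters, not anything from the article or the TensorFlow implementation.

```python
import numpy as np

# Hedged sketch of the core step in differentially private training:
# 1) clip each per-example gradient to a fixed L2 norm, bounding any one
#    person's influence on the update, then
# 2) add Gaussian noise scaled to that bound before averaging.

def privatize_gradients(per_example_grads, clip_norm=1.0,
                        noise_multiplier=1.1,
                        rng=np.random.default_rng(0)):
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # scale down (never up) so each example's gradient has norm <= clip_norm
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    mean = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(clipped),
                       size=mean.shape)
    return mean + noise

grads = [np.array([3.0, 4.0]),   # norm 5 -> clipped to norm 1
         np.array([0.6, 0.8])]   # norm 1 -> unchanged
update = privatize_gradients(grads)
```

Because the noise is calibrated to the clipping bound, the privacy loss of each update can be accounted for mathematically, which is where the "strong mathematical guarantees" in the extract come from.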
Developed by Dimitri Kanevsky, Research Scientist at Google who spoke at the RE•WORK Deep Learning Summit in San Francisco this January.
Extract: Dimitri has worked on speech recognition and communications technology for the last 30 years. Through his work, Dimitri—who has been deaf since early childhood—has helped shape the accessibility technologies he relies on. One of them is CART: a service where a captioner virtually joins a meeting to listen and create a transcription of spoken dialogue, which then displays on a computer screen. Dimitri’s teammate, Chet Gnegy, saw the challenges Dimitri faced using CART: he always carried multiple devices, it was costly and each meeting required a lot of preparation. This meant Dimitri could only use CART for formal business meetings or events, and not everyday conversations.
That inspired Chet to work with the Accessibility team to build a tool that could reduce Dimitri’s effort spent preparing for conversations. We thought: What if we used cloud-based automatic speech recognition to display spoken words on a screen?
Extract: As AI and machine learning begin to perform ever more impressive feats in image recognition and language comprehension, we may ask: could it also transform the task of finding new drugs?
The problem is that human researchers can explore only a tiny slice of what is possible. It’s estimated that there are as many as 10⁶⁰ potentially drug-like molecules—more than the number of atoms in the solar system. But traversing seemingly unlimited possibilities is what machine learning is good at. Trained on large databases of existing molecules and their properties, the programs can explore all possible related molecules.
Extract: Language understanding is a challenge for computers. Subtle nuances of communication that human toddlers can understand still confuse the most powerful machines. Even though advanced techniques like deep learning can detect and replicate complex language patterns, machine learning models still lack fundamental conceptual understanding of what our words really mean.
That said, 2018 did yield a number of landmark research breakthroughs which pushed the fields of natural language processing, understanding, and generation forward.
About: Nearly half of all working Americans could risk losing their jobs because of technology. It’s not only blue-collar jobs at stake. Millions of educated knowledge workers—writers, paralegals, assistants, medical technicians—are threatened by accelerating advances in artificial intelligence.
The industrial revolution shifted workers from farms to factories. In the first era of automation, machines relieved humans of manually exhausting work. Today, Era Two of automation continues to wash across the entire services-based economy that has replaced jobs in agriculture and manufacturing. Era Three, and the rise of AI, is dawning. Smart computers are demonstrating they are capable of making better decisions than humans. Brilliant technologies can now decide, learn, predict, and even comprehend much faster and more accurately than the human brain, and their progress is accelerating. Where will this leave lawyers, nurses, teachers, and editors?
RE•WORK White Paper, contributed to by leading minds in AI from Google, Shell, WeWork, XPRIZE, University of Waterloo, MIT-IBM Watson AI Lab and more.
About: With the rapid advancements and applications of AI, conversations have increased around the intentions of this technology. There are concerns that AI could be used with malicious intent rather than for the benefit of humankind. This paper explores areas where artificial intelligence can benefit society and tackle global challenges such as the environment, education, healthcare and sustainability. Topics covered include Global AI Initiatives, the Challenges of AI, the Benefits of AI for Social Good, and the Future of AI for Good, as well as case studies from Google, Intuitive AI and GoodAI.