This AI newsletter is all you need #26

Author(s): Towards AI Editorial Team

Originally published on Towards AI, the World's Leading AI and Technology News and Media Company. If you are building an AI-related product or service, we invite you to consider becoming an AI sponsor. At Towards AI, we help scale AI and technology startups. Let us help you unleash your technology to the masses.

What happened this week in AI

We were interested to see two new models released this week that we think can increase the flexibility and capabilities of ML for search, document, and data processing. OpenAI released its new and improved embedding model, which outperforms Davinci at most tasks at a 99.8% lower price. The new model replaces five separate models for text search, text similarity, and code search, while increasing the context length 4x and reducing the embedding size. The model is a more powerful tool for natural language processing and code tasks, and we think it has many interesting applications, including semantic search.
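To give a feel for the kind of semantic search this enables, here is a minimal sketch. It assumes the openai Python client available at the time of writing and the text-embedding-ada-002 model from OpenAI's announcement; the documents and query below are invented for the example, not taken from the newsletter.

```python
# Minimal semantic-search sketch: embed a few documents and a query,
# then rank the documents by cosine similarity to the query.
import numpy as np
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def embed(text: str) -> np.ndarray:
    """Return the embedding vector for a single piece of text."""
    response = openai.Embedding.create(model="text-embedding-ada-002", input=text)
    return np.array(response["data"][0]["embedding"])

documents = [
    "OpenAI released a cheaper, more capable embedding model.",
    "UDOP is a foundation model for document understanding and generation.",
    "Grid search evaluates every combination of hyperparameters.",
]
doc_vectors = np.stack([embed(d) for d in documents])
query_vector = embed("new embedding model from OpenAI")

# Cosine similarity between the query and every document.
scores = doc_vectors @ query_vector / (
    np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(query_vector)
)
print("Best match:", documents[int(scores.argmax())])
```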
While not related, we were also interested to see Microsoft release its new Universal Document Processing (UDOP) model. It is a foundation Document AI model for document understanding and generation tasks, where text is structurally embedded in documents together with other information such as symbols, figures, and style. It sets the state of the art on nine Document AI tasks across various domains and ranks first on the Document Understanding Benchmark leaderboard. We think both of these models are potentially part of a toolset for building AI applications where the accuracy, relevancy, and reliability of data recall and understanding are important.

Towards AI and Learn Prompting competition collaboration

We are also organizing a fun competition around prompting in collaboration with our friend Sander and the open-source course Learn Prompting! We will be kicking off the competition this week on the Learn AI Together Discord server (with a fun live stream!) and will announce it in next week's newsletter too. Stay tuned: the competition (open to everyone) will run until December 31st! Join us on Discord and enter the competition for a chance to win cool prizes!

Towards AI job offer

We are continuing to look for contractors to join Towards AI (~10 hours per month) to work on building learning resources (mostly open-source) for our community. We are looking for experience in one or both of the following:
- NLP (LLM implementation and prompting).
- Image generation models (you have implemented Stable Diffusion or another image synthesis model and have experience prompting or fine-tuning them).
Message me (@Louis B) on Discord or by email for more information!

Hottest News

Stanford CRFM's PubMedGPT 2.7B
A new 2.7 billion parameter language model trained on biomedical abstracts and papers. The GPT-style model achieves strong performance on a range of biomedical natural language processing tasks, including a new state-of-the-art result on the MedQA biomedical question answering task.

The state of AI in 2022 — and a half decade in review
This report offers an in-depth look at the past five years in the field of artificial intelligence, including statistics on AI adoption by companies, the most popular use cases for AI, the level of investment in the technology, developments in AI, and more.

DeepMind's AlphaCode Conquers Coding, Performing as Well as Humans
DeepMind's AI system has demonstrated impressive results on coding tasks, performing as well as humans in coding competitions with 5,000 participants. This article explains the unique features of this AI and how it is able to achieve such high levels of performance. Is the future of coding already here?

Geoffrey Hinton proposes an alternative to backpropagation: the Forward-Forward Algorithm
In a new paper, Geoffrey Hinton introduces the Forward-Forward (FF) algorithm as an alternative to backpropagation. @martin_gorner has summarized the key points of the paper in a Twitter thread. Hinton argues that, while it is unlikely that the human brain uses backpropagation to learn, FF is a plausible alternative that can be very power efficient and is well suited for self-supervised learning.

Three 5-minute reads/videos to keep you learning

Understanding Convolutions in Probability: A Mad-Science Perspective
This article explores the concept of convolutions from a probability perspective, including how to use them, how to compute them, and their mathematical definition. It provides clear examples and a visual map to help simplify the learning process.

Some Basic Image Preprocessing Operations for Beginners in Python
In this article, Rashida discusses some essential image preprocessing operations using OpenCV in Python, including translation, resizing, and cropping. If you are new to image processing and want to learn these basic techniques for tasks such as image classification, object detection, or optical character recognition, this article is a great resource.

This Year's Most Thought-Provoking Brain Discoveries
This article highlights the standout brain discoveries from the Society for Neuroscience meeting in 2022, providing insights into the latest breakthroughs in research on neural circuits. It's an interesting read for anyone looking to learn more about the latest developments in the field of neuroscience.

Want more? Dive deeper into one of them with the What's AI weekly!

The Learn AI Together Community section!

Meme of the week!

Meme shared by friedliver#0614

Featured Community post from the Discord

altryne#7376 submitted a fun project for the AssemblyAI hackathon. ChatGP-T1000 is an AI shape-shifter bot that assumes identities, understands your natural language, and replies with ChatGPT responses in character, with deepfaked audio and lip sync. Check it out here and support a fellow community member! You can leave your feedback in the thread here.

AI poll of the week!

Join the discussion on Discord.

TAI Curated section

Article of the week

Grid Search in Python A-Z: Searching for Perfection by Gencay I.
Among the many performance-boosting techniques in machine learning, the author explains grid search and random search. In grid search, the algorithm evaluates a machine learning model over a predefined grid of hyperparameter values to find the best combination. In contrast, random search samples random combinations of hyperparameters and evaluates the model's performance for each one.
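As a concrete illustration of the difference between the two strategies, here is a minimal sketch using scikit-learn's GridSearchCV and RandomizedSearchCV. The toy dataset, model, and hyperparameter grid are our own illustrative choices, not taken from the article.

```python
# Minimal sketch comparing grid search and random search with scikit-learn.
# The dataset, model, and grid below are illustrative, not from the article.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV

X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(random_state=0)

param_grid = {
    "n_estimators": [50, 100, 200],
    "max_depth": [None, 3, 5, 10],
}

# Grid search: exhaustively evaluates every combination in the grid.
grid_search = GridSearchCV(model, param_grid, cv=5)
grid_search.fit(X, y)
print("Grid search best params:", grid_search.best_params_)

# Random search: samples a fixed number of combinations instead.
rand_search = RandomizedSearchCV(model, param_grid, n_iter=5, cv=5, random_state=0)
rand_search.fit(X, y)
print("Random search best params:", rand_search.best_params_)
```

The trade-off is that grid search is exhaustive but its cost grows multiplicatively with every added hyperparameter, while random search caps the number of model fits via n_iter, which is why it is often preferred for larger search spaces.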
Our must-read articles

Support Vector Machines by Data Science meets Cyber Security

ChatGPT: How Does It Work Internally? by Patrick Meyer

If you are interested in publishing with Towards AI, check our guidelines and sign up. We will publish your work to […]