
OASIS: Writing Center

Writing a Paper: Overview

The Walden Writing Center staff is dedicated to ensuring your transition to a writing-intensive program is a smooth one. In the pages listed on the left you will find all the information you need to master the craft of scholarly writing.

If you are new to scholarly writing, it may be helpful to remember that writing is a process, not an event. Some common steps in the life cycle of a writing project are outlined below. Click each step to access information and resources on that topic, or navigate through our interactive "Life Cycle of a Paper" project at the bottom of the page.

Good luck with your scholarly writing, and if you have questions, Ask OASIS.

Writing Process

Click on each step to find out more about each part of the writing process.

Crash Course in Scholarly Writing Video Playlist

Note that these videos were created while APA 6 was the style guide edition in use. There may be some examples of writing that have not been updated to APA 7 guidelines.


Grad Coach

How To Write A Research Paper

Step-By-Step Tutorial With Examples + FREE Template

By: Derek Jansen (MBA) | Expert Reviewer: Dr Eunice Rautenbach | March 2024

For many students, crafting a strong research paper from scratch can feel like a daunting task – and rightly so! In this post, we’ll unpack what a research paper is, what it needs to do, and how to write one – in three easy steps. 🙂

Overview: Writing A Research Paper

  • What (exactly) is a research paper?
  • How to write a research paper
  • Stage 1: Topic & literature search
  • Stage 2: Structure & outline
  • Stage 3: Iterative writing
  • Key takeaways

Let’s start by asking the most important question, “What is a research paper?”

Simply put, a research paper is a scholarly written work where the writer (that’s you!) answers a specific question (this is called a research question) through evidence-based arguments. Evidence-based is the keyword here. In other words, a research paper is different from an essay or other writing assignments that draw from the writer’s personal opinions or experiences. With a research paper, it’s all about building your arguments based on evidence (we’ll talk more about that evidence a little later).

Now, it’s worth noting that there are many different types of research papers, including analytical papers (the type I just described), argumentative papers, and interpretative papers. Here, we’ll focus on analytical papers, as these are some of the most common – but if you’re keen to learn about other types of research papers, be sure to check out the rest of the blog.

With that basic foundation laid, let’s get down to business and look at how to write a research paper.


Overview: The 3-Stage Process

While there are, of course, many potential approaches you can take to write a research paper, there are typically three stages to the writing process. So, in this tutorial, we’ll present a straightforward three-step process that we use when working with students at Grad Coach.

These three steps are:

  • Finding a research topic and reviewing the existing literature
  • Developing a provisional structure and outline for your paper, and
  • Writing up your initial draft and then refining it iteratively

Let’s dig into each of these.


Step 1: Find a topic and review the literature

As we mentioned earlier, in a research paper, you, as the researcher, will try to answer a question. More specifically, that’s called a research question, and it sets the direction of your entire paper. What’s important to understand though is that you’ll need to answer that research question with the help of high-quality sources – for example, journal articles, government reports, case studies, and so on. We’ll circle back to this in a minute.

The first stage of the research process is deciding on what your research question will be and then reviewing the existing literature (in other words, past studies and papers) to see what they say about that specific research question. In some cases, your professor may provide you with a predetermined research question (or set of questions). However, in many cases, you’ll need to find your own research question within a certain topic area.

Finding a strong research question hinges on identifying a meaningful research gap – in other words, an area that’s lacking in existing research. There’s a lot to unpack here, so if you wanna learn more, check out the plain-language explainer video below.

Once you’ve figured out which question (or questions) you’ll attempt to answer in your research paper, you’ll need to do a deep dive into the existing literature – this is called a “literature search”. Again, there are many ways to go about this, but your most likely starting point will be Google Scholar.

If you’re new to Google Scholar, think of it as Google for the academic world. You can start by simply entering a few different keywords that are relevant to your research question and it will then present a host of articles for you to review. What you want to pay close attention to here is the number of citations for each paper – the more citations a paper has, the more credible it is (generally speaking – there are some exceptions, of course).


Ideally, what you’re looking for are well-cited papers that are highly relevant to your topic. That said, keep in mind that citations are a cumulative metric, so older papers will often have more citations than newer papers – just because they’ve been around for longer. So, don’t fixate on this metric in isolation – relevance and recency are also very important.

Beyond Google Scholar, you’ll also definitely want to check out academic databases and aggregators such as Science Direct, PubMed, JStor and so on. These will often overlap with the results that you find in Google Scholar, but they can also reveal some hidden gems – so, be sure to check them out.

Once you’ve worked your way through all the literature, you’ll want to catalogue all this information in some sort of spreadsheet so that you can easily recall who said what, when and within what context. If you’d like, we’ve got a free literature spreadsheet that helps you do exactly that.

Don’t fixate on an article’s citation count in isolation - relevance (to your research question) and recency are also very important.
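If it helps to see that catalogue in concrete terms, below is a minimal sketch that builds such a spreadsheet with Python’s standard csv module. The column names and the two entries are hypothetical (they are not Grad Coach’s template), and the sort order simply mirrors the advice above: relevance first, then recency, then citation count.

```python
import csv
from datetime import date

# Hypothetical catalogue entries: one row per source you have reviewed.
papers = [
    {"author": "Smith & Lee", "year": 2021, "title": "Wetland policy outcomes",
     "citations": 145, "relevance": "high",
     "key_finding": "Protection laws lag behind the rate of wetland loss"},
    {"author": "Garcia", "year": 2016, "title": "Economics of wetland restoration",
     "citations": 420, "relevance": "medium",
     "key_finding": "Restoration is cost-effective over the long term"},
]

# Rank by relevance first, then recency, then citation count, mirroring the
# advice above not to fixate on citations alone.
relevance_rank = {"high": 0, "medium": 1, "low": 2}
papers.sort(key=lambda p: (relevance_rank[p["relevance"]], -p["year"], -p["citations"]))

with open("literature_catalogue.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(papers[0].keys()))
    writer.writeheader()
    writer.writerows(papers)

print(f"Catalogued {len(papers)} sources on {date.today()}")
```

Opening literature_catalogue.csv in Excel or Google Sheets then gives you the same who-said-what-and-when view described above.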

Step 2: Develop a structure and outline

With your research question pinned down and your literature digested and catalogued, it’s time to move on to planning your actual research paper.

It might sound obvious, but it’s really important to have some sort of rough outline in place before you start writing your paper. So often, we see students eagerly rushing into the writing phase, only to land up with a disjointed research paper that rambles on in multiple directions.

Now, the secret here is to not get caught up in the fine details. Realistically, all you need at this stage is a bullet-point list that describes (in broad strokes) what you’ll discuss and in what order. It’s also useful to remember that you’re not glued to this outline – in all likelihood, you’ll chop and change some sections once you start writing, and that’s perfectly okay. What’s important is that you have some sort of roadmap in place from the start.

You need to have a rough outline in place before you start writing your paper - or you’ll end up with a disjointed research paper that rambles on.

At this stage you might be wondering, “But how should I structure my research paper?” Well, there’s no one-size-fits-all solution here, but in general, a research paper will consist of a few relatively standardised components:

  • Introduction
  • Literature review
  • Methodology
  • Results / analysis
  • Discussion
  • Conclusion

Let’s take a look at each of these.

First up is the introduction section. As the name suggests, the purpose of the introduction is to set the scene for your research paper. There are usually (at least) four ingredients that go into this section – these are the background to the topic, the research problem and resultant research question, and the justification or rationale. If you’re interested, the video below unpacks the introduction section in more detail.

The next section of your research paper will typically be your literature review. Remember all that literature you worked through earlier? Well, this is where you’ll present your interpretation of all that content. You’ll do this by writing about recent trends, developments, and arguments within the literature – but more specifically, those that are relevant to your research question. The literature review can oftentimes seem a little daunting, even to seasoned researchers, so be sure to check out our extensive collection of literature review content here.

With the introduction and lit review out of the way, the next section of your paper is the research methodology. In a nutshell, the methodology section should describe to your reader what you did (beyond just reviewing the existing literature) to answer your research question. For example, what data did you collect, how did you collect that data, how did you analyse that data and so on? For each choice, you’ll also need to justify why you chose to do it that way, and what the strengths and weaknesses of your approach were.

Now, it’s worth mentioning that for some research papers, this aspect of the project may be a lot simpler. For example, you may only need to draw on secondary sources (in other words, existing data sets). In some cases, you may just be asked to draw your conclusions from the literature search itself (in other words, there may be no data analysis at all). But, if you are required to collect and analyse data, you’ll need to pay a lot of attention to the methodology section. The video below provides an example of what the methodology section might look like.

By this stage of your paper, you will have explained what your research question is, what the existing literature has to say about that question, and how you analysed additional data to try to answer your question. So, the natural next step is to present your analysis of that data. This section is usually called the “results” or “analysis” section and this is where you’ll showcase your findings.

Depending on your school’s requirements, you may need to present and interpret the data in one section – or you might split the presentation and the interpretation into two sections. In the latter case, your “results” section will just describe the data, and the “discussion” is where you’ll interpret that data and explicitly link your analysis back to your research question. If you’re not sure which approach to take, check in with your professor or take a look at past papers to see what the norms are for your programme.

Alright – once you’ve presented and discussed your results, it’s time to wrap it up. This usually takes the form of the “conclusion” section. In the conclusion, you’ll need to highlight the key takeaways from your study and close the loop by explicitly answering your research question. Again, the exact requirements here will vary depending on your programme (and you may not even need a conclusion section at all) – so be sure to check with your professor if you’re unsure.
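Pulling the sections above together, here is a minimal sketch of what a broad-strokes outline might look like when jotted down as plain data. The section names follow the structure just described; the bullet points are placeholders, and whether you split results and discussion, or include a conclusion at all, depends on your programme as noted above.

```python
# A provisional, broad-strokes outline. The section names follow the structure
# discussed above; the bullet points are placeholders to replace with your own.
outline = {
    "Introduction": [
        "Background to the topic",
        "Research problem",
        "Research question",
        "Justification / rationale",
    ],
    "Literature review": [
        "Recent trends and developments",
        "Arguments relevant to the research question",
        "The research gap",
    ],
    "Methodology": [
        "What data was collected and how",
        "How the data was analysed",
        "Justification, strengths and weaknesses",
    ],
    "Results / analysis": ["Presentation of the findings"],
    "Discussion": ["Interpretation, linked back to the research question"],
    "Conclusion": ["Key takeaways", "Explicit answer to the research question"],
}

for section, points in outline.items():
    print(section)
    for point in points:
        print(f"  - {point}")
```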

Step 3: Write and refine

Finally, it’s time to get writing. All too often though, students hit a brick wall right about here… So, how do you avoid this happening to you?

Well, there’s a lot to be said when it comes to writing a research paper (or any sort of academic piece), but we’ll share three practical tips to help you get started.

First and foremost, it’s essential to approach your writing as an iterative process. In other words, you need to start with a really messy first draft and then polish it over multiple rounds of editing. Don’t waste your time trying to write a perfect research paper in one go. Instead, take the pressure off yourself by adopting an iterative approach.

Secondly, it’s important to always lean towards critical writing, rather than descriptive writing. What does this mean? Well, at the simplest level, descriptive writing focuses on the “what”, while critical writing digs into the “so what” – in other words, the implications. If you’re not familiar with these two types of writing, don’t worry! You can find a plain-language explanation here.

Last but not least, you’ll need to get your referencing right. Specifically, you’ll need to provide credible, correctly formatted citations for the statements you make. We see students making referencing mistakes all the time and it costs them dearly. The good news is that you can easily avoid this by using a simple reference manager. If you don’t have one, check out our video about Mendeley, an easy (and free) reference management tool that you can start using today.
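To illustrate what “correctly formatted” looks like in practice, the sketch below hand-builds a single APA 7 style journal-article reference from structured fields. The article details are invented, the function is a deliberate simplification (no multiple authors, missing DOIs, or italics), and it is not Mendeley’s API; the point is mainly to show why handing this chore to a reference manager is worth it.

```python
# A simplified APA 7 journal-article reference. Real references involve italics
# (journal name and volume), multiple authors, missing DOIs, and other edge cases
# that a reference manager such as Mendeley handles automatically.
def apa7_journal_reference(author, year, title, journal, volume, issue, pages, doi):
    return (f"{author} ({year}). {title}. {journal}, "
            f"{volume}({issue}), {pages}. https://doi.org/{doi}")

# Hypothetical source, used only to demonstrate the ordering and punctuation.
print(apa7_journal_reference(
    author="Nkomo, T. A.",
    year=2022,
    title="Wetland loss and municipal policy",
    journal="Journal of Environmental Planning",
    volume=14,
    issue=3,
    pages="201-219",
    doi="10.0000/example.doi",
))
```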

Recap: Key Takeaways

We’ve covered a lot of ground here. To recap, the three steps to writing a high-quality research paper are:

  • To choose a research question and review the literature
  • To plan your paper structure and draft an outline
  • To take an iterative approach to writing, focusing on critical writing and strong referencing

Remember, this is just a big-picture overview of the research paper development process and there’s a lot more nuance to unpack. So, be sure to grab a copy of our free research paper template to learn more about how to write a research paper.

How to Write a Paper

Last Updated: November 27, 2022

This article was co-authored by Matthew Snipp, PhD. C. Matthew Snipp is the Burnet C. and Mildred Finley Wohlford Professor of Humanities and Sciences in the Department of Sociology at Stanford University. He is also the Director for the Institute for Research in the Social Sciences’ Secure Data Center. He has been a Research Fellow at the U.S. Bureau of the Census and a Fellow at the Center for Advanced Study in the Behavioral Sciences. He has published 3 books and over 70 articles and book chapters on demography, economic development, poverty and unemployment. He is also currently serving on the National Institute of Child Health and Development’s Population Science Subcommittee. He holds a Ph.D. in Sociology from the University of Wisconsin—Madison. There are 12 references cited in this article, which can be found at the bottom of the page.

Whether you’re in high school or university, writing papers is probably a big part of your grade for at least some of your classes. Writing an essay on any topic can be challenging and time consuming. But, when you know how to break it down into parts and write each of those parts, it’s much easier! Follow the steps in this article for help writing your next paper from start to finish.

Pre-Writing

Step 1 Choose a topic and research it.

  • If you have an idea for a topic that isn’t listed, feel free to ask your instructor if it would be okay to write about something that isn’t on the list they provided.
  • In some cases, the teacher or professor might just provide an assignment sheet covering the logistics of the paper, but leave the topic choice up to you. If this happens, it can be helpful to come up with a short list of ideas on your own, then choose the best one.
  • Don’t hesitate to ask your instructor for guidance on choosing a topic if you’re having trouble deciding.

Step 2 Start by analyzing primary sources and looking for points to argue.

  • Note that there are different types of papers, including research papers, opinion papers, and analytical essays. All of them need a thesis statement and all of them require you to do research and review various sources in order to write them.

Expert tip from Matthew Snipp, PhD:

  • Keep in mind that a thesis is not a topic, a fact, or an opinion. It is an argument based on observations and findings that you are trying to prove in your paper, like a hypothesis statement in a science experiment.

Step 3 Write a brief thesis statement that tells readers what you’re arguing.

  • An example of a thesis statement for a research paper is: “The Soviet Union collapsed because of the ruling class’s inability to tackle the economic problems of the common people.” This tells the reader what point you are going to back up with evidence in the rest of your paper.
  • A thesis statement for an opinion paper might read something like: “Libraries are an essential community resource and as such should receive more funding from local municipal governments.”
  • An analytical essay’s thesis statement could be: “JD Salinger makes heavy use of symbolism in The Catcher in the Rye in order to create feelings of melancholy and uncertainty in the novel.”

Step 4 Make a list of major points to support your thesis as an outline.

  • For example, if your thesis is about why the government needs to do more to protect wetland ecosystems, main supporting points could be: “effects of wetland loss in the US,” “current lack of laws protecting wetlands,” and “benefits of saving wetlands.”
  • These major points form the body of your paper, in between your introduction and your conclusion.

Step 5 Write supporting ideas and arguments under each major point.

  • For example, under a main point that says “employment conditions affect the mental health of workers,” your sub-points might be: “high levels of stress are directly related to mental health” and “workers in low-skill positions tend to have higher levels of stress.”

Writing the Paper

Step 1 Start the introduction...

  • For example, you could write something like: “Did you know that cattle ranching is the leading cause of deforestation in the Amazon rainforest?”

Step 2 State the specific topic of your paper.

  • For example, you might write something like: “It’s common for apps and social media to be demonized as a waste of time and brain space, but not all such technology should be considered mindless entertainment. In fact, many apps and social media networks can be used for educational and academic purposes.”

Step 3 End the intro with your thesis statement.

  • Make sure the background information about your topic that you include in your intro flows nicely into your thesis.

Step 4 Discuss your major points in detail in the body of your paper.

  • Think of each paragraph as kind of a mini essay in and of itself. Each paragraph should be a self-contained chunk of information that relates to the overall topic and thesis of your paper.
  • Supporting evidence can be things like statistics, data, facts, and quotes from your sources.

Step 5 Connect your body paragraphs in a logical way.

  • For example, put a paragraph about the reasons behind the collapse of the Soviet Union before a paragraph about the changes to Eastern European societies in the 90s, because the collapse of the Soviet Union directly led to many of those changes.
  • If your first body paragraph discusses the extent of deforestation in the Amazon over the past decade, and your second paragraph is going to explain how that affects animal extinction, state the shift in focus by writing something like: “The deforestation of the Amazon over the last decade has resulted in a drastic reduction of natural habitats for many species.”

Step 6 Start your conclusion...

  • For example, if your thesis in your intro was “The use of technology can benefit children because it improves developmental skills,” restate it something like this: “The use of technology contributes to children’s well-rounded development from a young age.”

Step 7 Sum up your main supporting points and how they support your argument.

  • For example, you might write something like: “Deforestation is directly linked to climate change and increasingly extreme weather across the world, which is why global governments must take more action to stop illegal logging.”

Step 8 End by stating what the significance of your argument is.

  • For example, say something like: “Ignoring the realities of deforestation and climate change has grave implications for all of us. If we don’t start putting more pressure on governments to act, your children or grandchildren will be living in a very different world from that which we inhabit today.”

Citing Sources

Step 1 Write an MLA-style...

  • Humanities subjects include language arts and cultural studies.
  • Note that these are just the basic rules for writing an MLA-style works cited page. For a full list of rules regarding all things MLA and citations, refer to an MLA handbook.
  • Note that there is some crossover between certain subjects, in which case more than 1 style of works cited or reference page may be acceptable.
  • Always review your assignment rubric or ask your professor which style of citations they prefer before writing your reference or works cited page.

Step 2 Cite references in...

  • Social sciences include psychology, sociology, and anthropology.
  • Refer to an APA style guide for complete rules about how to list different types of sources.

Step 3 Cite sources in...

  • Check a Chicago Manual of Style for more specific instructions about citations.
  • Note that Chicago style is more commonly used for published works. If you’re a student, your professor might instruct you to use MLA format for your papers.
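To make the differences between the three styles above concrete, here is a minimal sketch that formats one invented, single-author book in MLA, APA 7, and Chicago (bibliography) style. It is deliberately simplified (italics, multiple authors, editions, and online sources are ignored), so always defer to the relevant handbook or your professor’s instructions for anything beyond this basic case.

```python
# Simplified single-author book citations; italics and many edge cases omitted.
book = {
    "first": "Jordan", "last": "Rivera",    # hypothetical author
    "title": "Wetlands and the Law",        # hypothetical title
    "publisher": "Example Press",
    "place": "Chicago",
    "year": 2019,
}

mla = f'{book["last"]}, {book["first"]}. {book["title"]}. {book["publisher"]}, {book["year"]}.'
apa = f'{book["last"]}, {book["first"][0]}. ({book["year"]}). {book["title"]}. {book["publisher"]}.'
chicago = f'{book["last"]}, {book["first"]}. {book["title"]}. {book["place"]}: {book["publisher"]}, {book["year"]}.'

print("MLA:     ", mla)      # Rivera, Jordan. Wetlands and the Law. Example Press, 2019.
print("APA 7:   ", apa)      # Rivera, J. (2019). Wetlands and the Law. Example Press.
print("Chicago: ", chicago)  # Rivera, Jordan. Wetlands and the Law. Chicago: Example Press, 2019.
```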

Revising, Editing, and Proofreading

Step 1 Analyze your paper and eliminate unnecessary information.

  • Anywhere from a few hours to a day is a good amount of time to wait before you start revising your paper. The point is to come back to it with a fresh set of eyes.
  • If you can, get a roommate, a family member, a friend, or a classmate to read your paper too. Ask them for advice on ways you could make your argument and your evidence more clear or relevant.
  • Here are 3 questions to ask yourself as you read each sentence and piece of information in your paper: Is this really worth saying? Does this say what I want it to say? Will readers understand what I’m saying?

Step 2 Tighten and clean up the language.

  • It helps to read your paper out loud as you do this. Listen for awkward pauses, phrases, and sentence structure, and revise them so the writing flows better.
  • Try copying and pasting your essay into the free online tool called “Hemingway.” The app suggests many different ways to make your writing clearer, more direct, and more readable.

Step 3 Edit for repetition and look for better words to use.

  • Take this sentence as an example: “The collapse of the Soviet Union resulted in the collapse of local governments and economies across Eastern Europe.” Instead of using “collapse” twice, replace the second instance with “crumbling.”

Step 4 Proofread for spelling, grammar, and punctuation errors.

  • It’s also a good idea to paste your paper into a third-party tool, like Grammarly, for a final spelling and grammar check. Not every program catches everything, so it’s better to be on the safe side!
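Hemingway and Grammarly do this kind of checking far more thoroughly, but as a rough illustration of the ideas in Steps 2 and 3, here is a minimal sketch that flags overly long sentences and words repeated within a sentence. The 30-word threshold is an arbitrary assumption, and the heuristic is a toy, not a substitute for those tools or for reading your paper out loud.

```python
import re
from collections import Counter

def review_draft(text, max_words=30):
    """Flag long sentences and repeated words. A rough, toy heuristic only."""
    # Naive sentence split on ., ! or ? followed by whitespace.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    for number, sentence in enumerate(sentences, start=1):
        words = re.findall(r"[A-Za-z']+", sentence.lower())
        if len(words) > max_words:
            print(f"Sentence {number}: {len(words)} words, consider splitting it.")
        repeats = [w for w, count in Counter(words).items() if count > 1 and len(w) > 4]
        if repeats:
            print(f"Sentence {number}: repeated word(s): {', '.join(repeats)}")

# The example from Step 3 above: "collapse" appears twice in one sentence.
review_draft("The collapse of the Soviet Union resulted in the collapse of "
             "local governments and economies across Eastern Europe.")
```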

References

  • https://owl.purdue.edu/owl/general_writing/common_writing_assignments/research_papers/choosing_a_topic.html
  • https://writingcenter.fas.harvard.edu/pages/developing-thesis
  • Matthew Snipp, PhD. Research Fellow, U.S. Bureau of the Census. Expert Interview. 26 March 2020.
  • https://library.piedmont.edu/c.php?g=521348&p=3564584
  • https://academicguides.waldenu.edu/writingcenter/writingprocess/outlining
  • https://writingcenter.uagc.edu/introductions-conclusions
  • https://library.leeds.ac.uk/info/14011/writing/112/essay_writing/6
  • https://owl.purdue.edu/owl/research_and_citation/mla_style/mla_formatting_and_style_guide/mla_works_cited_page_basic_format.html
  • https://aut.ac.nz.libguides.com/APA6th/referencelist
  • https://owl.purdue.edu/owl/research_and_citation/chicago_manual_17th_edition/cmos_formatting_and_style_guide/general_format.html
  • https://owl.purdue.edu/owl/general_writing/the_writing_process/proofreading/steps_for_revising.html
  • https://writingcenter.unc.edu/tips-and-tools/revising-drafts/

About This Article

Matthew Snipp, PhD

To write a paper, review the assignment sheet and rubric, and begin your research. Decide what you want to argue in your paper, and form it into your thesis statement, which is a sentence that sums up your argument and main points. Make an outline of the argument, and then start writing the introduction to the paper, which grabs your reader's attention and states the thesis. Then, include at least 3 paragraphs of supporting evidence for your argument, which makes up the body of the paper. Finally, end the paper with a conclusion that wraps up your points and restates your thesis. For tips from our academic reviewer on refining your paper and making a strong argument, read on!


Purdue Online Writing Lab (Purdue OWL®), College of Liberal Arts

APA Sample Paper


Note: This page reflects the latest version of the APA Publication Manual (i.e., APA 7), which was released in October 2019. The equivalent resource for the older APA 6 style can be found here.

Media Files: APA Sample Student Paper, APA Sample Professional Paper


Note: The APA Publication Manual, 7th Edition specifies different formatting conventions for student and professional papers (i.e., papers written for credit in a course and papers intended for scholarly publication). These differences mostly extend to the title page and running head. Crucially, citation practices do not differ between the two styles of paper.

However, for your convenience, we have provided two versions of our APA 7 sample paper below: one in student style and one in professional style.

Note: For accessibility purposes, we have used "Track Changes" to make comments along the margins of these samples. Those authored by [AF] denote explanations of formatting and [AWC] denote directions for writing and citing in APA 7. 

APA 7 Student Paper:

APA 7 Professional Paper:


By Steven Levy

8 Google Employees Invented Modern AI. Here’s the Inside Story

Eight names are listed as authors on “Attention Is All You Need,” a scientific paper written in the spring of 2017. They were all Google researchers, though by then one had left the company. When the most tenured contributor, Noam Shazeer, saw an early draft, he was surprised that his name appeared first, suggesting his contribution was paramount. “I wasn’t thinking about it,” he says.


It’s always a delicate balancing act to figure out how to list names—who gets the coveted lead position, who’s shunted to the rear. Especially in a case like this one, where each participant left a distinct mark in a true group effort. As the researchers hurried to finish their paper, they ultimately decided to “sabotage” the convention of ranking contributors. They added an asterisk to each name and a footnote: “Equal contributor,” it read. “Listing order is random.” The writers sent the paper off to a prestigious artificial intelligence conference just before the deadline—and kicked off a revolution.

Approaching its seventh anniversary, the “Attention” paper has attained legendary status. The authors started with a thriving and improving technology—a variety of AI called neural networks—and made it into something else: a digital system so powerful that its output can feel like the product of an alien intelligence . Called transformers, this architecture is the not-so-secret sauce behind all those mind-blowing AI products , including ChatGPT and graphic generators such as Dall-E and Midjourney. Shazeer now jokes that if he knew how famous the paper would become, he “might have worried more about the author order.” All eight of the signers are now microcelebrities. “I have people asking me for selfies—because I’m on a paper!” says Llion Jones, who is (randomly, of course) name number five.


“Without transformers I don’t think we’d be here now,” says Geoffrey Hinton , who is not one of the authors but is perhaps the world’s most prominent AI scientist . He’s referring to the ground-shifting times we live in, as OpenAI and other companies build systems that rival and in some cases surpass human output.

All eight authors have since left Google. Like millions of others, they are now working in some way with systems powered by what they created in 2017. I talked to the Transformer Eight to piece together the anatomy of a breakthrough, a gathering of human minds to create a machine that might well save the last word for itself.


The story of transformers begins with the fourth of the eight names: Jakob Uszkoreit.

Uszkoreit is the son of Hans Uszkoreit, a well-known computational linguist. As a high school student in the late 1960s, Hans was imprisoned for 15 months in his native East Germany for protesting the Soviet invasion of Czechoslovakia. After his release, he escaped to West Germany and studied computers and linguistics in Berlin. He made his way to the US and was working in an artificial intelligence lab at SRI, a research institute in Menlo Park, California, when Jakob was born. The family eventually returned to Germany, where Jakob went to university. He didn’t intend to focus on language, but as he was embarking on graduate studies, he took an internship at Google in its Mountain View office, where he landed in the company’s translation group. He was in the family business. He abandoned his PhD plans and, in 2012, decided to join a team at Google that was working on a system that could respond to users’ questions on the search page itself without diverting them to other websites. Apple had just announced Siri, a virtual assistant that promised to deliver one-shot answers in casual conversation, and the Google brass smelled a huge competitive threat: Siri could eat up their search traffic. They started paying a lot more attention to Uszkoreit’s new group.

“It was a false panic,” Uszkoreit says. Siri never really threatened Google. But he welcomed the chance to dive into systems where computers could engage in a kind of dialog with us. At the time, recurrent neural networks—once an academic backwater—had suddenly started outperforming other methods of AI engineering. The networks consist of many layers, and information is passed and repassed through those layers to identify the best responses. Neural nets were racking up huge wins in fields such as image recognition, and an AI renaissance was suddenly underway. Google was frantically rearranging its workforce to adopt the techniques. The company wanted systems that could churn out humanlike responses—to auto-complete sentences in emails or create relatively simple customer service chatbots.


But the field was running into limitations. Recurrent neural networks struggled to parse longer chunks of text. Take a passage like Joe is a baseball player, and after a good breakfast he went to the park and got two hits. To make sense of “two hits,” a language model has to remember the part about baseball. In human terms, it has to be paying attention. The accepted fix was something called “long short-term memory” (LSTM), an innovation that allowed language models to process bigger and more complex sequences of text. But the computer still handled those sequences strictly sequentially—word by tedious word—and missed out on context clues that might appear later in a passage. “The methods we were applying were basically Band-Aids,” Uszkoreit says. “We could not get the right stuff to really work at scale.”

Around 2014, he began to concoct a different approach that he referred to as self-attention. This kind of network can translate a word by referencing any other part of a passage. Those other parts can clarify a word’s intent and help the system produce a good translation. “It actually considers everything and gives you an efficient way of looking at many inputs at the same time and then taking something out in a pretty selective way,” he says. Though AI scientists are careful not to confuse the metaphor of neural networks with the way the biological brain actually works, Uszkoreit does seem to believe that self-attention is somewhat similar to the way humans process language.

Uszkoreit thought a self-attention model could potentially be faster and more effective than recurrent neural nets. The way it handles information was also perfectly suited to the powerful parallel processing chips that were being produced en masse to support the machine learning boom. Instead of using a linear approach (look at every word in sequence), it takes a more parallel one (look at a bunch of them together). If done properly, Uszkoreit suspected, you could use self-attention exclusively to get better results.
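For readers who want the mechanism rather than the metaphor, here is a minimal NumPy sketch of scaled dot-product self-attention, the operation the team later formalized in the paper: every position in a sequence computes a weighted mix of every other position, and all of it runs in parallel on exactly those chips. The dimensions and random weights are purely illustrative, and multi-head attention, masking, and training are all omitted.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)   # subtract the max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence X of shape (seq_len, d_model)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv          # project the inputs to queries, keys, values
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # every position scores every other position
    weights = softmax(scores, axis=-1)        # each row sums to 1: where to "pay attention"
    return weights @ V                        # weighted mix of the value vectors

rng = np.random.default_rng(0)
seq_len, d_model = 6, 8                       # e.g. a six-word sentence, toy dimensions
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = [rng.normal(size=(d_model, d_model)) for _ in range(3)]
print(self_attention(X, Wq, Wk, Wv).shape)    # -> (6, 8)
```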

Not everyone thought this idea was going to rock the world, including Uszkoreit’s father, who had scooped up two Google Faculty research awards while his son was working for the company. “People raised their eyebrows, because it dumped out all the existing neural architectures,” Jakob Uszkoreit says. Say goodbye to recurrent neural nets? Heresy! “From dinner-table conversations I had with my dad, we weren’t necessarily seeing eye to eye.”

Uszkoreit persuaded a few colleagues to conduct experiments on self-attention. Their work showed promise, and in 2016 they published a paper about it. Uszkoreit wanted to push their research further—the team’s experiments used only tiny bits of text—but none of his collaborators were interested. Instead, like gamblers who leave the casino with modest winnings, they went off to apply the lessons they had learned. “The thing worked,” he says. “The folks on that paper got excited about reaping the rewards and deploying it in a variety of different places at Google, including search and, eventually, ads. It was an amazing success in many ways, but I didn’t want to leave it there.”

Uszkoreit felt that self-attention could take on much bigger tasks. There’s another way to do this, he’d argue to anyone who would listen, and some who wouldn’t, outlining his vision on whiteboards in Building 1945, named after its address on Charleston Road on the northern edge of the Google campus.


One day in 2016, Uszkoreit was having lunch in a Google café with a scientist named Illia Polosukhin. Born in Ukraine, Polosukhin had been at Google for nearly three years. He was assigned to the team providing answers to direct questions posed in the search field. It wasn’t going all that well. “To answer something on Google.com, you need something that’s very cheap and high-performing,” Polosukhin says. “Because you have milliseconds” to respond. When Polosukhin aired his complaints, Uszkoreit had no problem coming up with a remedy. “He suggested, why not use self-attention?” says Polosukhin.

Polosukhin sometimes collaborated with a colleague named Ashish Vaswani. Born in India and raised mostly in the Middle East, he had gone to the University of Southern California to earn his doctorate in the school’s elite machine translation group. Afterward, he moved to Mountain View to join Google—specifically a newish organization called Google Brain. He describes Brain as “a radical group” that believed “neural networks were going to advance human understanding.” But he was still looking for a big project to work on. His team worked in Building 1965 next door to Polosukhin’s language team in 1945, and he heard about the self-attention idea. Could that be the project? He agreed to work on it.


Together, the three researchers drew up a design document called “Transformers: Iterative Self-Attention and Processing for Various Tasks.” They picked the name “transformers” from “day zero,” Uszkoreit says. The idea was that this mechanism would transform the information it took in, allowing the system to extract as much understanding as a human might—or at least give the illusion of that. Plus Uszkoreit had fond childhood memories of playing with the Hasbro action figures. “I had two little Transformer toys as a very young kid,” he says. The document ended with a cartoony image of six Transformers in mountainous terrain, zapping lasers at one another.

There was also some swagger in the sentence that began the paper: “We are awesome.”

In early 2017, Polosukhin left Google to start his own company. By then new collaborators were coming onboard. An Indian engineer named Niki Parmar had been working for an American software company in India when she moved to the US. She earned a master’s degree from USC in 2015 and was recruited by all the Big Tech companies. She chose Google. When she started, she joined up with Uszkoreit and worked on model variants to improve Google search.


Another new member was Llion Jones. Born and raised in Wales, he loved computers “because it was not normal.” At the University of Birmingham he took an AI course and got curious about neural networks, which were presented as a historical curiosity. He got his master’s in July 2009 and, unable to find a job during the recession, lived on the dole for months. He found a job at a local company and then applied to Google as a “hail Mary.” He got the gig and eventually landed in Google Research, where his manager was Polosukhin. One day, Jones heard about the concept of self-attention from a fellow worker named Mat Kelcey, and he later joined up with Team Transformers. (Later, Jones ran into Kelcey and briefed him on the transformer project. Kelcey wasn’t buying it. “I told him, ‘I’m not sure that’s going to work,’ which is basically the biggest incorrect prediction of my life,” Kelcey says now.)

The transformer work drew in other Google Brain researchers who were also trying to improve large language models. This third wave included Łukasz Kaiser, a Polish-born theoretical computer scientist, and his intern, Aidan Gomez. Gomez had grown up in a small farming village in Ontario, Canada, where his family would tap maple trees every spring for syrup. As a junior at the University of Toronto, he “fell in love” with AI and joined the machine learning group—Geoffrey Hinton’s lab. He began contacting people at Google who had written interesting papers, with ideas for extending their work. Kaiser took the bait and invited him to intern. It wasn’t until months later that Gomez learned those internships were meant for doctoral students, not undergrads like him.

Kaiser and Gomez quickly understood that self-attention looked like a promising, and more radical, solution to the problem they were addressing. “We had a deliberate conversation about whether we wanted to merge the two projects,” says Gomez. The answer was yes.

The transformer crew set about building a self-attention model to translate text from one language to another. They measured its performance using a benchmark called BLEU, which compares a machine’s output to the work of a human translator. From the start, their new model did well. “We had gone from no proof of concept to having something that was at least on par with the best alternative approaches to LSTMs by that time,” Uszkoreit says. But compared to long short-term memory, “it wasn’t better.”
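BLEU itself is just arithmetic over overlapping n-grams. The sketch below is a simplified single-reference version (real BLEU evaluation adds tokenization rules, smoothing, and corpus-level statistics), but it shows the core idea: reward overlap with a human translation and penalize output that is too short.

```python
import math
from collections import Counter

def simple_bleu(candidate, reference, max_n=2):
    """Toy BLEU: modified n-gram precision plus a brevity penalty, one reference."""
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        cand_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
        ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        overlap = sum(min(count, ref_ngrams[gram]) for gram, count in cand_ngrams.items())
        precisions.append(overlap / max(sum(cand_ngrams.values()), 1))
    if min(precisions) == 0:
        return 0.0
    geometric_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    brevity_penalty = min(1.0, math.exp(1 - len(ref) / len(cand)))
    return brevity_penalty * geometric_mean

# A machine output compared against a human translation (both invented).
print(round(simple_bleu("the cat sat on the mat", "the cat is on the mat"), 3))  # -> 0.707
```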

They had reached a plateau—until one day in 2017, when Noam Shazeer heard about their project, by accident. Shazeer was a veteran Googler—he’d joined the company in 2000—and an in-house legend, starting with his work on the company’s early ad system. Shazeer had been working on deep learning for five years and recently had become interested in large language models. But these models were nowhere close to producing the fluid conversations that he believed were possible.

As Shazeer recalls it, he was walking down a corridor in Building 1965 and passing Kaiser’s workspace. He found himself listening to a spirited conversation. “I remember Ashish was talking about the idea of using self-attention, and Niki was very excited about it. I’m like, wow, that sounds like a great idea. This looks like a fun, smart group of people doing something promising.” Shazeer found the existing recurrent neural networks “irritating” and thought: “Let’s go replace them!”

Shazeer’s joining the group was critical. “These theoretical or intuitive mechanisms, like self-attention, always require very careful implementation, often by a small number of experienced ‘magicians,’ to even show any signs of life,” says Uszkoreit. Shazeer began to work his sorcery right away. He decided to write his own version of the transformer team’s code. “I took the basic idea and made the thing up myself,” he says. Occasionally he asked Kaiser questions, but mostly, he says, he “just acted on it for a while and came back and said, ‘Look, it works.’” Using what team members would later describe with words like “magic” and “alchemy” and “bells and whistles,” he had taken the system to a new level.

“That kicked off a sprint,” says Gomez. They were motivated, and they also wanted to hit an upcoming deadline—May 19, the filing date for papers to be presented at the biggest AI event of the year, the Neural Information Processing Systems conference in December. As what passes for winter in Silicon Valley shifted to spring, the pace of the experiments picked up. They tested two models of transformers: one that was produced with 12 hours of training and a more powerful version called Big that was trained over three and a half days. They set them to work on English-to-German translation.

The basic model outperformed all competitors—and Big earned a BLEU score that decisively shattered previous records while also being more computationally efficient. “We had done it in less time than anyone out there,” Parmar says. “And that was only the beginning, because the number kept improving.” When Uszkoreit heard this, he broke out an old bottle of champagne he had lying around in his mountain expedition truck.

The last two weeks before the deadline were frantic. Though officially some of the team still had desks in Building 1945, they mostly worked in 1965 because it had a better espresso machine in the micro-kitchen. “People weren’t sleeping,” says Gomez, who, as the intern, lived in a constant debugging frenzy and also produced some diagrams for the paper. It’s common in such projects to do ablations—taking things out to see whether what remains is enough to get the job done.

“There was every possible combination of tricks and modules—which one helps, which doesn’t help. Let’s rip it out. Let’s replace it with this,” Gomez says. “Why is the model behaving in this counterintuitive way? Oh, it’s because we didn’t remember to do the masking properly. Does it work yet? OK, move on to the next. All of these components of what we now call the transformer were the output of this extremely high-paced, iterative trial and error.” The ablations, aided by Shazeer’s implementations, produced “something minimalistic,” Jones says. “Noam is a wizard.”

Vaswani recalls crashing on an office couch one night while the team was writing the paper. As he stared at the curtains that separated the couch from the rest of the room, he was struck by the pattern on the fabric, which looked to him like synapses and neurons. Gomez was there, and Vaswani told him that what they were working on would transcend machine translation. “Ultimately, like with the human brain, you need to unite all these modalities—speech, audio, vision—under a single architecture,” he says. “I had a strong hunch we were onto something more general.”

In the higher echelons of Google, however, the work was seen as just another interesting AI project. I asked several of the transformers folks whether their bosses ever summoned them for updates on the project. Not so much. But “we understood that this was potentially quite a big deal,” says Uszkoreit. “And it caused us to actually obsess over one of the sentences in the paper toward the end, where we comment on future work.”

That sentence anticipated what might come next—the application of transformer models to basically all forms of human expression. “We are excited about the future of attention-based models,” they wrote. “We plan to extend the transformer to problems involving input and output modalities other than text” and to investigate “images, audio and video.”

A couple of nights before the deadline, Uszkoreit realized they needed a title. Jones noted that the team had landed on a radical rejection of the accepted best practices, most notably LSTMs, for one technique: attention. The Beatles, Jones recalled, had named a song “All You Need Is Love.” Why not call the paper “Attention Is All You Need”?

The Beatles?

“I’m British,” says Jones. “It literally took five seconds of thought. I didn’t think they would use it.”

They continued collecting results from their experiments right up until the deadline. “The English-French numbers came, like, five minutes before we submitted the paper,” says Parmar. “I was sitting in the micro-kitchen in 1965, getting that last number in.” With barely two minutes to spare, they sent off the paper.

Google, as almost all tech companies do, quickly filed provisional patents on the work. The reason was not to block others from using the ideas but to build up its patent portfolio for defensive purposes. (The company has a philosophy of “if technology advances, Google will reap the benefits.”)

When the transformer crew heard back from the conference peer reviewers, the response was a mix. “One was positive, one was extremely positive, and one was, ‘This is OK,’” says Parmar. The paper was accepted for one of the evening poster sessions.

By December, the paper was generating a buzz. Their four-hour session on December 6 was jammed with scientists wanting to know more. The authors talked until they were hoarse. By 10:30 pm, when the session closed, there was still a crowd. “Security had to tell us to leave,” says Uszkoreit. Perhaps the most satisfying moment for him was when computer scientist Sepp Hochreiter came up and praised the work—quite a compliment, considering that Hochreiter was the coinventor of long short-term memory, which transformers had just booted as the go-to hammer in the AI toolkit.

Transformers did not instantly take over the world, or even Google. Kaiser recalls that around the time of the paper’s publication, Shazeer proposed to Google executives that the company abandon the entire search index and train a huge network with transformers—basically to transform how Google organizes information. At that point, even Kaiser considered the idea ridiculous. Now the conventional wisdom is that it’s a matter of time.

A startup called OpenAI was much faster to pounce. Soon after the paper was published, OpenAI’s chief researcher, Ilya Sutskever—who had known the transformer team during his time at Google—suggested that one of its scientists, Alec Radford, work on the idea. The results were the first GPT products. As OpenAI CEO Sam Altman told me last year, “When the transformer paper came out, I don’t think anyone at Google realized what it meant.”

The picture internally is more complicated. “It was pretty evident to us that transformers could do really magical things,” says Uszkoreit. “Now, you may ask the question, why wasn’t there ChatGPT by Google back in 2018? Realistically, we could have had GPT-3 or even 3.5 probably in 2019, maybe 2020. The big question isn’t, did they see it? The question is, why didn’t we do anything with the fact that we had seen it? The answer is tricky.”


Many tech critics point to Google’s transition from an innovation-centered playground to a bottom-line-focused bureaucracy. As Gomez told the Financial Times, “They weren’t modernizing. They weren’t adopting this tech.” But that would have taken a lot of daring for a giant company whose technology led the industry and reaped huge profits for decades. Google did begin to integrate transformers into products in 2018, starting with its translation tool. Also that year, it introduced a new transformer-based language model called BERT, which it started to apply to search the year after.

But these under-the-hood changes seem timid compared to OpenAI’s quantum leap and Microsoft’s bold integration of transformer-based systems into its product line. When I asked CEO Sundar Pichai last year why his company wasn’t first to launch a large language model like ChatGPT, he argued that in this case Google found it advantageous to let others lead. “It’s not fully clear to me that it might have worked out as well. The fact is, we can do more after people had seen how it works,” he said.

There is the undeniable truth that all eight authors of the paper have left Google. Polosukhin’s company, Near, built a blockchain whose tokens have a market capitalization around $4 billion. Parmar and Vaswani paired up as business partners in 2021 to start Adept (estimated valuation of $1 billion) and are now on their second company, called Essential AI ($8 million in funding). Llion Jones’ Tokyo-based Sakana AI is valued at $200 million. Shazeer, who left in October 2021, cofounded Character AI (estimated valuation of $5 billion). Aidan Gomez, the intern in the group, cofounded Cohere in Toronto in 2019 (estimated valuation of $2.2 billion). Jakob Uszkoreit’s biotech company, Inceptive, is valued at $300 million. All those companies (except Near) are based on transformer technology.


Kaiser is the only one who hasn’t founded a company. He joined OpenAI and is one of the inventors of a new technology called Q*, which Altman said last year will “push the veil of ignorance back and the frontier of discovery forward.” (When I attempted to quiz Kaiser on this in our interview, the OpenAI PR person almost leaped across the table to silence him.)

Does Google miss these escapees? Of course, in addition to others who have migrated from the company to new AI startups. (Pichai reminded me, when I asked him about the transformer departures, that industry darling OpenAI also has seen defections: “The AI area is very, very dynamic,” he said.) But Google can boast that it created an environment that supported the pursuit of unconventional ideas. “In a lot of ways Google has been way ahead—they invested in the right minds and created the environment where we could explore and push the envelope,” Parmar says. “It’s not crazy that it took time to adopt it. Google had so much more at stake.”

Without that environment: no transformer. Not only were the authors all Google employees, they also worked out of the same offices. Hallway encounters and overheard lunch conversations led to big moments. The group is also culturally diverse. Six of the eight authors were born outside the United States; the other two are children of two green-card-carrying Germans who were temporarily in California and a first-generation American whose family had fled persecution, respectively.

Uszkoreit, speaking from his office in Berlin, says that innovation is all about the right conditions. “It’s getting people who are super excited about something who are at the right point in their life,” he says. “If you have that and have fun while you do it, and you’re working on the right problems—and you’re lucky—the magic happens.”

Something magical also happened between Uszkoreit and his famous father. After all those dinner table debates, Hans Uszkoreit, his son reports, has now cofounded a company that is building large language models. Using transformers, of course.

Let us know what you think about this article. Submit a letter to the editor at [email protected]. Updated 3/21/2024, 10 pm EDT: This article was updated to correct the spelling of Alec Radford's name.

Updated 3/25/2024, noon EST: This article has been updated to clarify Aidan Gomez's contributions to the paper.


By Damien Cave

Damien Cave’s iPhone says he averages around six hours of use per day.

The handwritten letters from our 13-year-old daughter sit on our coffee table in a clear plastic folder. With their drawings of pink flowers and long paragraphs marked with underlined and crossed-out words, they are an abridged, analog version of her spirited personality — and a way for my wife and me to keep her close as we watch TV and fiddle with our phones.


They would not exist, of course, if Amelia was home with us in Sydney. But she is hundreds of miles away at a uniquely Australian school in the bush, where she is running and hiking dozens of miles a week, sharing chores with classmates, studying only from books and, most miraculously, spending her whole ninth-grade school year without the internet, a phone, a computer or even a camera with a screen.

Our friends and relatives in the United States can hardly believe this is even a possibility. There, it is considered bold just to talk about taking smartphones from students during class time. Here in Australia, a growing number of respected schools lock up smart everything for months. They surround digital natives with nature. They make tap-and-swipe teens learn, play and communicate only through real-life interaction or words scrawled on the page.

“What a gift this is,” we told Amelia, when she was accepted, hesitated, then decided to go.

What I underestimated was how hard it would be for us at home. Removing the liveliest member of our family, without calls or texts, felt like someone had taken one of my internal organs across state lines without telling me how to heal. The silence and hunger to see paper in the mailbox, anything from my girl, spurred nausea and a rush to the Stoics.

Yet as we adjust, her correspondence and ours — traveling hundreds of miles, as if from one era to another — is teaching us all more than we’d imagined. The gift of digital detox that we thought Australia was giving our daughter has also become a revelatory bequest for us — her American parents and her older brother.

Something in the act of writing, sending and waiting days or weeks for a reply, and in the physical and social challenges experienced by our daughter at a distance, is changing all of our personal operating systems. Without the ever-present immediacy of digital connection, even just temporarily, can a family be rewired?

Amelia is at Timbertop, the ninth-grade campus of Geelong Grammar, one of Australia’s oldest private schools, which has made outdoor education a priority since the 1950s. The headmaster at the time, James Darling, was inspired by Outward Bound, a movement birthed in Europe before World War II that aimed to build competence and confidence. But rather than tack on an adventure for a few days or weeks — as such programs generally do in the United States — Mr. Darling Australianized the idea and made it residential.

Geelong bought a huge tract of rural land in the state of Victoria, at the base of Mount Timbertop, in 1951. Students helped build some of the rustic cabins where my daughter and her classmates now live — cabins where hot showers happen only if they chop wood and fire it up in an old-fashioned boiler. The idea was to build courage, curiosity and compassion among adolescents, and their ranks have ranged from the children of sheep farmers and diplomats to a certain angsty member of the British royal family named Charles. The current king of England spent a semester at Timbertop in 1966. He later said it was “by far the best part” of his education.

Many schools have trod a similar path, with analog outposts in the hinterlands. And like a lot of elite schooling, these programs hold up a mirror to national mythology. For Australia, the goal is hardiness, not Harvard: Outdoor ed thrives on a sparsely populated island the size of the continental United States where there is still a deep love for the pastoral, where “mateship” in the face of unexpected hardship lives on in novels and pop culture.

The bush schools of Australia are not cheap — Timbertop costs around $55,000, with room and board, on par with private day schools in New York City, but as steep as it gets in Oz. For regular Geelong students, the experience is compulsory; others must apply and be selected after an interview, yielding a class of 240 boys and girls who have signed up for, along with the usual classes, community service at local farms, winter camping in the snow and, in the final term, a six-day hike, where students plan their own route and are entirely self-sufficient.

The year is meant to be difficult.

Before we dropped Amelia off in late January, we received a video from Timbertop showing teachers sitting at picnic tables in the sun, warning that confidence and personal growth would come only with struggles and perseverance. My wife and I, having grown up when such things could be easily acquired for free, laughed at what felt like a satirical New Age pitch. Thanks for paying lots of money, now get ready to suffer!

Within 24 hours, we started to understand what that meant. Not for Amelia. For us.

The WhatsApp group for parents from Sydney was abuzz with pangs of despair and grief. Gone were the texts asking for a ride or wondering what’s for dinner. The apps we all relied on to chat or to know whether our kids were on the bus were useless. We knew where they all were. But we couldn’t call — even phones sit outside Timbertop asceticism, except in emergencies. Were their cabin mates nice? Were they miserable with all the running, hiking and cleanliness inspections?

A few days in, I also couldn’t avoid tough questions about myself. Was the fact that it was so hard to lose contact a comment on my over-involved parenting? My own ridiculous addiction to tech-fueled immediacy? Or both?

“Withdrawal” was a word we heard discussed in Timbertop, or “TT,” circles. In Amelia’s first letter, arriving after a week that felt like a year, we could certainly see the symptoms. She was anxious about friendships, wanting them to form as quickly as they do on Snapchat. In her Timbertop interview, when asked about homesickness, she had bluntly said “that’s the least of my worries,” but, in fact, Amelia missed us — even her brother. Her early letters to us and to him made clear that she found the intensity of her emotions surprising.

My wife, Diana, and I wrote back right away with encouragement. We scrutinized a school ID photo that appeared on the Geelong website — proof of life! — and spoke to her unit leader, a warm, wonderful teacher charged with monitoring her cabin of 15 girls. She assured us that things would improve when the rhythm of letter writing became more regular.

I was skeptical, but Timbertop seemed to know what it was doing. We had to trust. We had to write.

The last time I’d composed actual letters, it was the late ’90s, and one of my closest friends was in the Peace Corps in Paraguay. We exchanged tales of our exploits on blue paper as thin as tissue that folded up into an envelope to minimize the weight for postage. This time, I mostly typed in Google Docs using the newsletter template so I could easily add photos and, as I told Amelia, create more of a Pinterest vibe. Totally disconnecting and writing by hand — that still felt too slow and out of reach for me.

And yet, among the more fascinating elements of the process has been watching Amelia’s handwriting change. She sent 19 letters home in the first five weeks, from a page to a few, and they show heaps of growth in penmanship. Words have taken clearer shape and fit better together, flowing with her thoughts, delivering humor, fear and a heightened self-awareness that seems to come from long hikes and sitting quietly without electronic distractions.

Her missives still contain common requests from a 13-year-old — send me this or that — and phrases we don’t understand. My favorite moments are the sudden interludes that reveal she’s not alone, but writing the letter at a mandatory letter-writing time in a room with other girls. I almost cried with joy when, between critiquing one particular class, she wrote about her recent hike: “OH MY GOD. The Mt. TT was 1,200 meters high! Just found that out. Crazy.”

Reading that, I felt enormous pride and thought: Maybe it’s the mix of the banal, the deep — and all that is omitted — that makes letters distinct. They pass from our mind in a way that allows for a portrait of the self to emerge that can be more revealing than what we get through electronic media because letters often lack editing, are long enough to justify postage and are run through with holes of subjectivity.

For example, in my early letters to Amelia, I left out details of home because I was consumed by curiosity and concern. I asked a million questions about the food, the weekly schedule, classes, teachers, hiking and chores, because, well, didn’t she want her parents to know?

But every letter we received seemed to veer away from my questions to what she cared and worried about. Two or three weeks in, I offered a bribe — I'd send her a present if she would write to us with the funniest story she had experienced or heard. Even then, it took a while to get an answer, and it was far less satisfying than when she, of her own accord, started sharing smile-inducing tales that included honey poured in shoes, gross dirty dishes, tears while hiking, bribing a boy with snacks to chop wood, falling down a trail and the mysterious reappearance of a lost camping knife.

The experiences she told us about, including the occasional mention of a class in positive psychology to identify personal strengths, spoke to the importance of play and pushing adolescents into environments where they can learn they are far more capable of managing risks and taking on tough tasks than they (or we) might think.

But I was also starting to find value in the retelling, in the slow sharing of our lives by analog means — in the letter writing itself.

Seeking more insight, I reached out to John Marsden, the former head of the English department at Timbertop and a best-selling young adult novelist who later founded his own experiential learning school north of Melbourne.

He laughed when I asked about the meaning of letters.

“It’s been happening for thousands of years,” he said. “It’s just new for this generation.”

After a bit of joking at my expense and Timbertop reminiscing, he went on to suggest that what I was discovering in our letters might in fact be something significant — what he often tells parents they should aim for in their own families, in their own ways.

He called it a “gradual divergence.”

Places like Timbertop, in his view, don’t just provide important firsthand experiences with the outdoors. They also mark “the beginning of divergence from the path of the adults which needs to happen, which, in modern Western society, is increasingly difficult for children to achieve.”

He told me he often draws a diagram to help parents understand. I asked him to send a copy by email.

“I don’t have a scanner but it’s just as simple as shown here!” he wrote, attaching a photo. “The third one is the healthy one. The vertical lines indicate adolescence but of course it’s simplistic to imply that adolescence begins in such a measurable, almost abrupt way.”

What he was getting at — what I could see in his and Amelia’s own hand-drawn correspondence — suddenly became clear.

The letters to and fro are both a point of connection between us and our daughter and a way to push for the right amount of separation. They fill and expand the in-between. Letters written with the delays of snail mail in mind, if we’re lucky, let us develop a voice apart from others, with less (or no) attention to the pings and alerts of harried modern life.

In Amelia’s case, letters let her speak at her own pace, meandering in expression, sharing the trivial and private, sending away the stress, marking in ink the joys and messy uncertainties. They point to a certain kind of gift, but not like my wife and I had imagined.

Amelia’s experience involves not just the luxury of removal — the taking away of social media. It also includes an addition, something the letters capture and embody: the gift of agency. Far from home at 13, in a messed-up world, she has landed where there is intellectual space and the means to practice a method for asserting and exploring who she is and wants to become. She has found a room of one’s own.

I’m tempted to send her a letter detailing my discovery. Maybe this time, I’ll write it by hand. Better yet, maybe I’ll let her tell me what she thinks when she gets the urge.


Damien Cave is an international correspondent for The Times, covering the Indo-Pacific region. He is based in Sydney, Australia.



How to Send a Letter or Postcard: Domestic

Sending mail with USPS is easy! Our video will help you with most letters, cards, and postcards you send domestically (inside the U.S.), including U.S. territories and military bases in the U.S. and abroad.

For how to ship a package, see How to Send a Package: Domestic .

Send Mail: Step-by-Step Instructions


Step 1: Choose Envelope or Postcard

Envelopes are for sending flat, flexible things, like letters, cards, checks, forms, and other paper goods. For just one $0.68 First-Class Mail ® Forever ® stamp, you can send 1 oz (about 4 sheets of regular, 8-1/2" x 11" paper in a rectangular envelope) to anywhere in the U.S.!

No. 10 envelope compared to the minimum and maximum envelope sizes

Envelopes must be rectangular and made of paper to qualify for letter prices. Your envelope can be a maximum of 11-1/2" long x 6-1/8" high. (A standard No. 10 envelope is 9-1/2" long x 4-1/8" high.) You can fold what you put in your envelope, but it needs to stay flat—no more than 1/4" thick.

If you want to send letter-sized papers without folding them, you can use a large envelope (called a "flat"); the postage for flats starts at $1.39 . If your large envelope is nonrectangular, rigid (can't bend), or lumpy (not uniformly thick), you'll have to pay the package price.

TIP: If your envelope can't fit through USPS mail processing machines, or is rigid, lumpy or has clasps, string, or buttons, it's "nonmachinable" and you'll have to pay $0.40 more to send it. ( See additional postage in Step 3 .) You'll also have to pay more if your envelopes are square or vertical (taller than they are wide).

Postcards are for short messages that you don't need to put in an envelope. Save money using a $0.53 postcard stamp to send a standard-sized postcard anywhere in the U.S. Standard postcards are usually made of paper, are between 5" to 6" long and 3-1/2" to 4-1/4" high, and are between 0.007" and 0.016" thick.

Envelope and postcard with return address written in the top left corner and delivery address in the bottom center.

Step 2: Address Your Mail

Envelopes: Write your address (the "return" or "sender" address) in the top left corner. Write the delivery address (the "recipient" address) in the bottom center.

Postcards: Postcards come in different formats, so write the delivery address in the space it gives you (on the same side you write your message and put the stamp).

Print your return address and the delivery address clearly, in the correct spots, to make sure your mail is delivered on time.

Address Format Tips

  • Use a pen or permanent marker.
  • Do not use commas or periods.
  • Include the ZIP+4 ® Code whenever possible.

Write Sender Address

Write your address (the "return address") in the top-left corner. Include the following on separate lines:

  • Your full name or company name
  • Apartment or suite number
  • Full street address
  • City, State, and ZIP+4 Code

Write Delivery Address

Write the delivery address (the "recipient" address) in the bottom center of the envelope. Include the following on separate lines:

  • Recipient's full name or company name
  • Apartment or suite number
  • Full street address
  • City, State, and ZIP+4 Code

If the apartment or suite number cannot fit on the delivery address line above the city, state, and ZIP+4 Code, place it on a separate line immediately above the delivery address line.

Special U.S. Addresses

Puerto Rico

Some Puerto Rico addresses include an urbanization or community code for a specific area or development. Addresses with an urbanization code, abbreviated URB, should be written on 4 lines:

MS MARIA SUAREZ
URB LAS GLADIOLAS
150 CALLE A
SAN JUAN PR 00926-3232

More Puerto Rico Address Examples

U.S. Virgin Islands

Virgin Islands addresses have the same format as standard addresses. The right abbreviation for this territory is "VI," not "US VI" or "USA VI":

MS JOAN SMITH
RR 1 BOX 6601
KINGSHILL VI 00850-9802

Military and Diplomatic Mail (APO/FPO/DPO)

Mail to military and diplomatic addresses is treated differently:

  • Do not include the city or country name when you send something to an APO/FPO/DPO address in another country. This keeps your mail out of foreign mail networks.
  • Do include unit and box numbers if they're assigned:

SEAMAN JOSEPH SMITH
UNIT 100100 BOX 4120
FPO AP 96691

More Details on Military Addresses

When you're done addressing your envelope, put what you're sending inside the envelope, then close and seal it (using the envelope's glue or tape).
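To illustrate the layout rules above in code, here is a minimal, hypothetical sketch that assembles a delivery address block with each element on its own line. It skips validation and the special cases covered above (APO/FPO/DPO mail, urbanization codes), and it is not a USPS tool.

```python
# Minimal sketch: build a delivery address block following the layout above.
# Not a USPS tool; it skips validation and special cases (APO/FPO/DPO, URB codes).
def delivery_address(name: str, street: str, city: str, state: str, zip_code: str,
                     unit: str = "") -> str:
    lines = [name.upper()]
    if unit:
        # A unit that won't fit on the street line goes on its own line
        # immediately above the delivery address line.
        lines.append(unit.upper())
    lines.append(street.upper())
    lines.append(f"{city.upper()} {state.upper()} {zip_code}")
    return "\n".join(lines)

print(delivery_address("Joan Smith", "RR 1 Box 6601", "Kingshill", "VI", "00850-9802"))
```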

Envelope and postcard, each with a stamp in the upper right corner

Step 3: Calculate Postage (& Add Insurance or Extra Services)

A First-Class Mail ® Forever stamp costs $0.68 and goes in the upper right corner of the envelope. (You can also use any combination of stamps that adds up to $0.68.)

If your letter is heavier or bigger, or if you want to add insurance or extra services like Certified Mail ® service, you'll pay more.

A standard postcard stamp costs $0.53 . (Large or square postcards will cost more.) Put the postcard stamp in the space provided near the delivery address.

a paper for writing

Postage for letters mostly depends on weight and size/shape. You can weigh your letter with a kitchen scale or postal scale, at a self-service kiosk, or at the Post Office ™ counter.

TIP: As a rule of thumb, you can send 1 oz (4 sheets of printer paper and a business-sized envelope) for 1 First-Class Mail ® Forever ® stamp (currently $0.68).

The postage for a large envelope (or flat) starts at $1.39 for 1 oz.

Where Can I Buy Postage?

  • The Postal Store ® Shop online for all stamps and add-on postage for oversized or heavier envelopes.
  • Post Office Locations Buy stamps at Post Office locations , self-service kiosks , or at Approved Postal Providers ® such as grocery and drug stores.

TIP: If you're sending larger envelopes (flats) using Priority Mail ® or Priority Mail Express ® service, you can use Click-N-Ship ® service to pay for and print your own postage online.

Additional Postage

If your envelope weighs over 1 oz, you can buy additional postage in the amount you need:

  • Each additional 1 oz is $0.24, for letters up to 3.5 oz and large envelopes up to 13 oz.
  • Nonmachinable items, including envelopes that are lumpy or rigid, or that have clasps, string, or buttons, will cost $0.44 more to send. You'll also have to pay more if your envelopes are square or vertical (taller than they are wide).
  • You can also buy 1¢, 2¢, 3¢, 4¢, 5¢, and 10¢ stamps at The Postal Store .

TIP: Put the stamp on last; that way, if you make a mistake at any other point, you won't waste a stamp.
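As a concrete illustration of the pricing arithmetic above, here is a minimal sketch of a letter-postage estimator. The rate constants are copied from this page and will go out of date; treat it as an example, not an official calculator.

```python
import math

# Rates quoted on this page (subject to change; check the official USPS price calculator).
FIRST_OUNCE = 0.68              # First-Class Mail Forever stamp, 1 oz letter
ADDITIONAL_OUNCE = 0.24         # each additional ounce, letters up to 3.5 oz
NONMACHINABLE_SURCHARGE = 0.44  # lumpy, rigid, square, or vertical envelopes

def estimate_letter_postage(weight_oz: float, nonmachinable: bool = False) -> float:
    """Estimate postage for a standard letter (not a flat or a package)."""
    if weight_oz <= 0:
        raise ValueError("weight must be positive")
    if weight_oz > 3.5:
        raise ValueError("letters over 3.5 oz are priced as large envelopes (flats)")
    extra_ounces = math.ceil(weight_oz) - 1
    total = FIRST_OUNCE + extra_ounces * ADDITIONAL_OUNCE
    if nonmachinable:
        total += NONMACHINABLE_SURCHARGE
    return round(total, 2)

# Example: a 2 oz square invitation -> 0.68 + 0.24 + 0.44 = 1.36
print(estimate_letter_postage(2, nonmachinable=True))
```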

Calculate a Price

Add-On Services

If you want insurance, proof of delivery, signature services, or other optional services, you'll have to pay extra.

Our Insurance & Extra Services page has more details; some of the more common add-on services for letters include:

  • Certified Mail ® : Get proof that you mailed your item and that the recipient signed for it.
  • Registered Mail ® : USPS's most secure mail service–mail is processed manually, handled separately and securely, and signed for along every step of its journey. The recipient must sign for the mail to confirm delivery (or attempted delivery).
  • Return Receipt: You'll get a printed or emailed delivery record showing the recipient's signature. You can combine Return Receipt with other services, including Certified Mail, Registered Mail, Priority Mail Express ® service, and more.
  • Adult Signature Required: Only an adult (age 21+) can sign for the mail after showing a valid government ID .


Step 4: Send Your Mail

Once your envelope or postcard has the correct addresses and postage, you can send it several ways, including putting it in your mailbox or dropping it in a blue collection box or at a Post Office ™ location.


  • Put your letter inside your mailbox and raise the flag (if you have one).
  • If you have a cluster mailbox, drop it in the outgoing mail slot.
  • Drop it off in a blue collection box.
  • Take it to a Post Office lobby drop.

Important Note: If your envelope has postage stamps and weighs more than 10 oz or is thicker than 1/2", you can't put it in a collection box; you have to give it to an employee at a Post Office location. See more details on What Can and Cannot be Deposited in a Collection Box?

Bonus: Sending Mail Pro Tips

The Postal Service uses high-speed sorting machines to help process and deliver 425.3 million mail pieces each day. Here are some extra tips to improve your mail sending experience:

  • Stay flexible : Don't send rigid (hard) objects in paper envelopes.
  • Sending embellished invitations (for weddings, graduations, etc.)? Get them hand-canceled or put them inside another envelope.
  • Need tracking? Learn about your options.


Stay Flexible

Postcards, letter envelopes, and large envelopes (flats) all need to bend to fit through USPS ® high-speed sorting machines.

  • OK: Flexible, flat things like stickers, photos, trading cards, etc. should be okay—as long as your envelope stays flat, not lumpy, and less than 1/4" thick.
  • Not OK: Don't put rigid objects (like flash drives, coins, keys, hard plastic card cases, etc.) loose in unpadded paper envelopes: They could get torn out of the envelope, jam the sorting machines, cause a delay, or even get lost.

Instead, for rigid and odd-shaped objects (or things you don't want to get bent), we recommend using a padded envelope or small box and sending it as a package .

Sending Embellished Invitations (for Weddings, Graduations, etc.)

If you want to send a specially decorated envelope (like some wedding invitations):

  • You can pay the extra fee for nonmachinable First-Class Mail ® items, bring your mail to the Post Office™ counter, and ask the retail associate to hand-cancel your embellished invitations.
  • For externally decorated invitations: If you use wax seals, strings, ribbons, etc. on your envelopes, don't try to send them exposed. Instead, to make sure your envelopes arrive looking the way your designer intended, put them inside another envelope .

Need Tracking?

Tracking is not available for First-Class Mail items. If you'd like to get tracking information for your letter:

  • You can pay extra to send your letter using Priority Mail Express ® or Priority Mail ® service.
  • You can get delivery confirmation by adding Certified Mail ® or Registered Mail ® service. (You can even combine it with Return Receipt if you want the recipient's signature.)


  • Open access
  • Published: 26 March 2024

Predicting and improving complex beer flavor through machine learning

  • Michiel Schreurs   ORCID: orcid.org/0000-0002-9449-5619 1 , 2 , 3   na1 ,
  • Supinya Piampongsant 1 , 2 , 3   na1 ,
  • Miguel Roncoroni   ORCID: orcid.org/0000-0001-7461-1427 1 , 2 , 3   na1 ,
  • Lloyd Cool   ORCID: orcid.org/0000-0001-9936-3124 1 , 2 , 3 , 4 ,
  • Beatriz Herrera-Malaver   ORCID: orcid.org/0000-0002-5096-9974 1 , 2 , 3 ,
  • Christophe Vanderaa   ORCID: orcid.org/0000-0001-7443-5427 4 ,
  • Florian A. Theßeling 1 , 2 , 3 ,
  • Łukasz Kreft   ORCID: orcid.org/0000-0001-7620-4657 5 ,
  • Alexander Botzki   ORCID: orcid.org/0000-0001-6691-4233 5 ,
  • Philippe Malcorps 6 ,
  • Luk Daenen 6 ,
  • Tom Wenseleers   ORCID: orcid.org/0000-0002-1434-861X 4 &
  • Kevin J. Verstrepen   ORCID: orcid.org/0000-0002-3077-6219 1 , 2 , 3  

Nature Communications volume  15 , Article number:  2368 ( 2024 )


  • Chemical engineering
  • Gas chromatography
  • Machine learning
  • Metabolomics
  • Taste receptors

The perception and appreciation of food flavor depends on many interacting chemical compounds and external factors, and therefore proves challenging to understand and predict. Here, we combine extensive chemical and sensory analyses of 250 different beers to train machine learning models that allow predicting flavor and consumer appreciation. For each beer, we measure over 200 chemical properties, perform quantitative descriptive sensory analysis with a trained tasting panel and map data from over 180,000 consumer reviews to train 10 different machine learning models. The best-performing algorithm, Gradient Boosting, yields models that significantly outperform predictions based on conventional statistics and accurately predict complex food features and consumer appreciation from chemical profiles. Model dissection allows identifying specific and unexpected compounds as drivers of beer flavor and appreciation. Adding these compounds results in variants of commercial alcoholic and non-alcoholic beers with improved consumer appreciation. Together, our study reveals how big data and machine learning uncover complex links between food chemistry, flavor and consumer perception, and lays the foundation to develop novel, tailored foods with superior flavors.


Introduction

Predicting and understanding food perception and appreciation is one of the major challenges in food science. Accurate modeling of food flavor and appreciation could yield important opportunities for both producers and consumers, including quality control, product fingerprinting, counterfeit detection, spoilage detection, and the development of new products and product combinations (food pairing) 1 , 2 , 3 , 4 , 5 , 6 . Accurate models for flavor and consumer appreciation would contribute greatly to our scientific understanding of how humans perceive and appreciate flavor. Moreover, accurate predictive models would also facilitate and standardize existing food assessment methods and could supplement or replace assessments by trained and consumer tasting panels, which are variable, expensive and time-consuming 7 , 8 , 9 . Lastly, apart from providing objective, quantitative, accurate and contextual information that can help producers, models can also guide consumers in understanding their personal preferences 10 .

Despite the myriad of applications, predicting food flavor and appreciation from its chemical properties remains a largely elusive goal in sensory science, especially for complex food and beverages 11 , 12 . A key obstacle is the immense number of flavor-active chemicals underlying food flavor. Flavor compounds can vary widely in chemical structure and concentration, making them technically challenging and labor-intensive to quantify, even in the face of innovations in metabolomics, such as non-targeted metabolic fingerprinting 13 , 14 . Moreover, sensory analysis is perhaps even more complicated. Flavor perception is highly complex, resulting from hundreds of different molecules interacting at the physiochemical and sensorial level. Sensory perception is often non-linear, characterized by complex and concentration-dependent synergistic and antagonistic effects 15 , 16 , 17 , 18 , 19 , 20 , 21 that are further convoluted by the genetics, environment, culture and psychology of consumers 22 , 23 , 24 . Perceived flavor is therefore difficult to measure, with problems of sensitivity, accuracy, and reproducibility that can only be resolved by gathering sufficiently large datasets 25 . Trained tasting panels are considered the prime source of quality sensory data, but require meticulous training, are low throughput and high cost. Public databases containing consumer reviews of food products could provide a valuable alternative, especially for studying appreciation scores, which do not require formal training 25 . Public databases offer the advantage of amassing large amounts of data, increasing the statistical power to identify potential drivers of appreciation. However, public datasets suffer from biases, including a bias in the volunteers that contribute to the database, as well as confounding factors such as price, cult status and psychological conformity towards previous ratings of the product.

Classical multivariate statistics and machine learning methods have been used to predict flavor of specific compounds by, for example, linking structural properties of a compound to its potential biological activities or linking concentrations of specific compounds to sensory profiles 1 , 26 . Importantly, most previous studies focused on predicting organoleptic properties of single compounds (often based on their chemical structure) 27 , 28 , 29 , 30 , 31 , 32 , 33 , thus ignoring the fact that these compounds are present in a complex matrix in food or beverages and excluding complex interactions between compounds. Moreover, the classical statistics commonly used in sensory science 34 , 35 , 36 , 37 , 38 , 39 require a large sample size and sufficient variance amongst predictors to create accurate models. They are not fit for studying an extensive set of hundreds of interacting flavor compounds, since they are sensitive to outliers, have a high tendency to overfit and are less suited for non-linear and discontinuous relationships 40 .

In this study, we combine extensive chemical analyses and sensory data of a set of different commercial beers with machine learning approaches to develop models that predict taste, smell, mouthfeel and appreciation from compound concentrations. Beer is particularly suited to model the relationship between chemistry, flavor and appreciation. First, beer is a complex product, consisting of thousands of flavor compounds that partake in complex sensory interactions 41 , 42 , 43 . This chemical diversity arises from the raw materials (malt, yeast, hops, water and spices) and biochemical conversions during the brewing process (kilning, mashing, boiling, fermentation, maturation and aging) 44 , 45 . Second, the advent of the internet saw beer consumers embrace online review platforms, such as RateBeer (ZX Ventures, Anheuser-Busch InBev SA/NV) and BeerAdvocate (Next Glass, inc.). In this way, the beer community provides massive data sets of beer flavor and appreciation scores, creating extraordinarily large sensory databases to complement the analyses of our professional sensory panel. Specifically, we characterize over 200 chemical properties of 250 commercial beers, spread across 22 beer styles, and link these to the descriptive sensory profiling data of a 16-person in-house trained tasting panel and data acquired from over 180,000 public consumer reviews. These unique and extensive datasets enable us to train a suite of machine learning models to predict flavor and appreciation from a beer’s chemical profile. Dissection of the best-performing models allows us to pinpoint specific compounds as potential drivers of beer flavor and appreciation. Follow-up experiments confirm the importance of these compounds and ultimately allow us to significantly improve the flavor and appreciation of selected commercial beers. Together, our study represents a significant step towards understanding complex flavors and reinforces the value of machine learning to develop and refine complex foods. In this way, it represents a stepping stone for further computer-aided food engineering applications 46 .

To generate a comprehensive dataset on beer flavor, we selected 250 commercial Belgian beers across 22 different beer styles (Supplementary Fig.  S1 ). Beers with ≤ 4.2% alcohol by volume (ABV) were classified as non-alcoholic and low-alcoholic. Blonds and Tripels constitute a significant portion of the dataset (12.4% and 11.2%, respectively) reflecting their presence on the Belgian beer market and the heterogeneity of beers within these styles. By contrast, lager beers are less diverse and dominated by a handful of brands. Rare styles such as Brut or Faro make up only a small fraction of the dataset (2% and 1%, respectively) because fewer of these beers are produced and because they are dominated by distinct characteristics in terms of flavor and chemical composition.

Extensive analysis identifies relationships between chemical compounds in beer

For each beer, we measured 226 different chemical properties, including common brewing parameters such as alcohol content, iso-alpha acids, pH, sugar concentration 47 , and over 200 flavor compounds (Methods, Supplementary Table  S1 ). A large portion (37.2%) are terpenoids arising from hopping, responsible for herbal and fruity flavors 16 , 48 . A second major category are yeast metabolites, such as esters and alcohols, that result in fruity and solvent notes 48 , 49 , 50 . Other measured compounds are primarily derived from malt, or other microbes such as non- Saccharomyces yeasts and bacteria (‘wild flora’). Compounds that arise from spices or staling are labeled under ‘Others’. Five attributes (caloric value, total acids and total ester, hop aroma and sulfur compounds) are calculated from multiple individually measured compounds.

As a first step in identifying relationships between chemical properties, we determined correlations between the concentrations of the compounds (Fig.  1 , upper panel, Supplementary Data  1 and 2 , and Supplementary Fig.  S2 ; for the sake of clarity, only a subset of the measured compounds is shown in Fig.  1 ). Compounds of the same origin typically show a positive correlation, while absence of correlation hints at parameters varying independently. For example, the hop aroma compounds citronellol and alpha-terpineol show moderate correlations with each other (Spearman's rho=0.39 and 0.57), but not with the bittering hop component iso-alpha acids (Spearman's rho=0.16 and −0.07). This illustrates how brewers can independently modify hop aroma and bitterness by selecting hop varieties and dosage time. If hops are added early in the boiling phase, chemical conversions increase bitterness while aromas evaporate; conversely, late addition of hops preserves aroma but limits bitterness 51 . Similarly, hop-derived iso-alpha acids show a strong anti-correlation with lactic acid and acetic acid, likely reflecting growth inhibition of lactic acid and acetic acid bacteria, or the consequent use of fewer hops in sour beer styles, such as West Flanders ales and Fruit beers, that rely on these bacteria for their distinct flavors 52 . Finally, yeast-derived esters (ethyl acetate, ethyl decanoate, ethyl hexanoate, ethyl octanoate) and alcohols (ethanol, isoamyl alcohol, isobutanol, and glycerol) correlate with Spearman coefficients above 0.5, suggesting that these secondary metabolites are correlated with the yeast genetic background and/or fermentation parameters and may be difficult to influence individually, although the choice of yeast strain may offer some control 53 .

Figure 1

Spearman rank correlations are shown. Descriptors are grouped according to their origin (malt (blue), hops (green), yeast (red), wild flora (yellow), Others (black)), and sensory aspect (aroma, taste, palate, and overall appreciation). Please note that for the chemical compounds, for the sake of clarity, only a subset of the total number of measured compounds is shown, with an emphasis on the key compounds for each source. For more details, see the main text and Methods section. Chemical data can be found in Supplementary Data  1 , correlations between all chemical compounds are depicted in Supplementary Fig.  S2 and correlation values can be found in Supplementary Data  2 . See Supplementary Data  4 for sensory panel assessments and Supplementary Data  5 for correlation values between all sensory descriptors.
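As an illustration (not the authors' code), pairwise Spearman correlations of the kind reported above can be computed directly from a table of compound concentrations; the file and column names below are hypothetical placeholders.

```python
# Pairwise Spearman rank correlations between measured compound concentrations.
# One row per beer, one column per compound; names are hypothetical placeholders.
import pandas as pd

chem = pd.read_csv("beer_chemistry.csv")
subset = ["citronellol", "alpha_terpineol", "iso_alpha_acids", "lactic_acid"]
rho = chem[subset].corr(method="spearman")
print(rho.round(2))
```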

Interestingly, different beer styles show distinct patterns for some flavor compounds (Supplementary Fig.  S3 ). These observations agree with expectations for key beer styles, and serve as a control for our measurements. For instance, Stouts generally show high values for color (darker), while hoppy beers contain elevated levels of iso-alpha acids, compounds associated with bitter hop taste. Acetic and lactic acid are not prevalent in most beers, with notable exceptions such as Kriek, Lambic, Faro, West Flanders ales and Flanders Old Brown, which use acid-producing bacteria ( Lactobacillus and Pediococcus ) or unconventional yeast ( Brettanomyces ) 54 , 55 . Glycerol, ethanol and esters show similar distributions across all beer styles, reflecting their common origin as products of yeast metabolism during fermentation 45 , 53 . Finally, low/no-alcohol beers contain low concentrations of glycerol and esters. This is in line with the production process for most of the low/no-alcohol beers in our dataset, which are produced through limiting fermentation or by stripping away alcohol via evaporation or dialysis, with both methods having the unintended side-effect of reducing the amount of flavor compounds in the final beer 56 , 57 .

Besides expected associations, our data also reveals less trivial associations between beer styles and specific parameters. For example, geraniol and citronellol, two monoterpenoids responsible for citrus, floral and rose flavors and characteristic of Citra hops, are found in relatively high amounts in Christmas, Saison, and Brett/co-fermented beers, where they may originate from terpenoid-rich spices such as coriander seeds instead of hops 58 .

Tasting panel assessments reveal sensorial relationships in beer

To assess the sensory profile of each beer, a trained tasting panel evaluated each of the 250 beers for 50 sensory attributes, including different hop, malt and yeast flavors, off-flavors and spices. Panelists used a tasting sheet (Supplementary Data  3 ) to score the different attributes. Panel consistency was evaluated by repeating 12 samples across different sessions and performing ANOVA. In 95% of cases no significant difference was found across sessions ( p  > 0.05), indicating good panel consistency (Supplementary Table  S2 ).
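As a simple illustration of this kind of consistency check, a one-way ANOVA can be run per attribute on samples repeated across sessions; the scores below are invented for the example.

```python
# Panel-consistency sketch: one-way ANOVA on a repeated sample across sessions.
# A p-value above 0.05 suggests no significant session effect for this attribute.
from scipy.stats import f_oneway

# Hypothetical scores for the same beer and attribute, rated in three sessions
session1 = [3.0, 2.5, 3.5, 3.0]
session2 = [3.2, 2.8, 3.0, 3.4]
session3 = [2.9, 3.1, 3.3, 2.7]

stat, p = f_oneway(session1, session2, session3)
print(f"F={stat:.2f}, p={p:.3f}")
```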

Aroma and taste perception reported by the trained panel are often linked (Fig.  1 , bottom left panel and Supplementary Data  4 and 5 ), with high correlations between hops aroma and taste (Spearman’s rho=0.83). Bitter taste was found to correlate with hop aroma and taste in general (Spearman’s rho=0.80 and 0.69), and particularly with “grassy” noble hops (Spearman’s rho=0.75). Barnyard flavor, most often associated with sour beers, is identified together with stale hops (Spearman’s rho=0.97) that are used in these beers. Lactic and acetic acid, which often co-occur, are correlated (Spearman’s rho=0.66). Interestingly, sweetness and bitterness are anti-correlated (Spearman’s rho = −0.48), confirming the hypothesis that they mask each other 59 , 60 . Beer body is highly correlated with alcohol (Spearman’s rho = 0.79), and overall appreciation is found to correlate with multiple aspects that describe beer mouthfeel (alcohol, carbonation; Spearman’s rho= 0.32, 0.39), as well as with hop and ester aroma intensity (Spearman’s rho=0.39 and 0.35).

Similar to the chemical analyses, sensorial analyses confirmed typical features of specific beer styles (Supplementary Fig.  S4 ). For example, sour beers (Faro, Flanders Old Brown, Fruit beer, Kriek, Lambic, West Flanders ale) were rated acidic, with flavors of both acetic and lactic acid. Hoppy beers were found to be bitter and showed hop-associated aromas like citrus and tropical fruit. Malt taste is most detected among scotch, stout/porters, and strong ales, while low/no-alcohol beers, which often have a reputation for being ‘worty’ (reminiscent of unfermented, sweet malt extract) appear in the middle. Unsurprisingly, hop aromas are most strongly detected among hoppy beers. Like its chemical counterpart (Supplementary Fig.  S3 ), acidity shows a right-skewed distribution, with the most acidic beers being Krieks, Lambics, and West Flanders ales.

Tasting panel assessments of specific flavors correlate with chemical composition

We find that the concentrations of several chemical compounds strongly correlate with specific aroma or taste, as evaluated by the tasting panel (Fig.  2 , Supplementary Fig.  S5 , Supplementary Data  6 ). In some cases, these correlations confirm expectations and serve as a useful control for data quality. For example, iso-alpha acids, the bittering compounds in hops, strongly correlate with bitterness (Spearman’s rho=0.68), while ethanol and glycerol correlate with tasters’ perceptions of alcohol and body, the mouthfeel sensation of fullness (Spearman’s rho=0.82/0.62 and 0.72/0.57 respectively) and darker color from roasted malts is a good indication of malt perception (Spearman’s rho=0.54).

Figure 2

Heatmap colors indicate Spearman’s Rho. Axes are organized according to sensory categories (aroma, taste, mouthfeel, overall), chemical categories and chemical sources in beer (malt (blue), hops (green), yeast (red), wild flora (yellow), Others (black)). See Supplementary Data  6 for all correlation values.

Interestingly, for some relationships between chemical compounds and perceived flavor, correlations are weaker than expected. For example, the rose-smelling phenethyl acetate only weakly correlates with floral aroma. This hints at more complex relationships and interactions between compounds and suggests a need for a more complex model than simple correlations. Lastly, we uncovered unexpected correlations. For instance, the esters ethyl decanoate and ethyl octanoate appear to correlate slightly with hop perception and bitterness, possibly due to their fruity flavor. Iron is anti-correlated with hop aromas and bitterness, most likely because it is also anti-correlated with iso-alpha acids. This could be a sign of metal chelation of hop acids 61 , given that our analyses measure unbound hop acids and total iron content, or could result from the higher iron content in dark and Fruit beers, which typically have less hoppy and bitter flavors 62 .

Public consumer reviews complement expert panel data

To complement and expand the sensory data of our trained tasting panel, we collected 180,000 reviews of our 250 beers from the online consumer review platform RateBeer. This provided numerical scores for beer appearance, aroma, taste, palate, overall quality as well as the average overall score.

Public datasets are known to suffer from biases, such as price, cult status and psychological conformity towards previous ratings of a product. For example, prices correlate with appreciation scores for these online consumer reviews (rho=0.49, Supplementary Fig.  S6 ), but not for our trained tasting panel (rho=0.19). This suggests that prices affect consumer appreciation, which has been reported in wine 63 , while blind tastings are unaffected. Moreover, we observe that some beer styles, like lagers and non-alcoholic beers, generally receive lower scores, reflecting that online reviewers are mostly beer aficionados with a preference for specialty beers over lager beers. In general, we find a modest correlation between our trained panel’s overall appreciation score and the online consumer appreciation scores (Fig.  3 , rho=0.29). Apart from the aforementioned biases in the online datasets, serving temperature, sample freshness and surroundings, which are all tightly controlled during the tasting panel sessions, can vary tremendously across online consumers and can further contribute to (among others, appreciation) differences between the two categories of tasters. Importantly, in contrast to the overall appreciation scores, for many sensory aspects the results from the professional panel correlated well with results obtained from RateBeer reviews. Correlations were highest for features that are relatively easy to recognize even for untrained tasters, like bitterness, sweetness, alcohol and malt aroma (Fig.  3 and below).

Figure 3

RateBeer text mining results can be found in Supplementary Data  7 . Rho values shown are Spearman correlation values, with asterisks indicating significant correlations ( p  < 0.05, two-sided). All p values were smaller than 0.001, except for Esters aroma (0.0553), Esters taste (0.3275), Esters aroma—banana (0.0019), Coriander (0.0508) and Diacetyl (0.0134).

Besides collecting consumer appreciation from these online reviews, we developed automated text analysis tools to gather additional data from review texts (Supplementary Data  7 ). Processing review texts on the RateBeer database yielded comparable results to the scores given by the trained panel for many common sensory aspects, including acidity, bitterness, sweetness, alcohol, malt, and hop tastes (Fig.  3 ). This is in line with what would be expected, since these attributes require less training for accurate assessment and are less influenced by environmental factors such as temperature, serving glass and odors in the environment. Consumer reviews also correlate well with our trained panel for 4-vinyl guaiacol, a compound associated with a very characteristic aroma. By contrast, correlations for more specific aromas like ester, coriander or diacetyl are underrepresented in the online reviews, underscoring the importance of using a trained tasting panel and standardized tasting sheets with explicit factors to be scored for evaluating specific aspects of a beer. Taken together, our results suggest that public reviews are trustworthy for some, but not all, flavor features and can complement or substitute taste panel data for these sensory aspects.
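The text-mining pipeline itself is not described in detail here, but the basic idea, mapping attribute-related keywords to mentions in each review, can be sketched as follows. The keyword lists are invented for illustration, and the study's actual tooling is certainly more sophisticated.

```python
# Toy keyword-based scoring of review texts per sensory attribute.
# Keyword lists are illustrative only; the study's real pipeline is more involved.
import re

ATTRIBUTE_KEYWORDS = {
    "bitterness": ["bitter", "ibu"],
    "sweetness": ["sweet", "sugary", "caramel"],
    "acidity": ["sour", "tart", "acidic"],
}

def score_review(text: str) -> dict:
    text = text.lower()
    return {
        attr: sum(len(re.findall(rf"\b{re.escape(kw)}\b", text)) for kw in kws)
        for attr, kws in ATTRIBUTE_KEYWORDS.items()
    }

print(score_review("Quite bitter up front, with a sweet caramel finish and a tart edge."))
```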

Models can predict beer sensory profiles from chemical data

The rich datasets of chemical analyses, tasting panel assessments and public reviews gathered in the first part of this study provided us with a unique opportunity to develop predictive models that link chemical data to sensorial features. Given the complexity of beer flavor, basic statistical tools such as correlations or linear regression may not always be the most suitable for making accurate predictions. Instead, we applied different machine learning models that can model both simple linear and complex interactive relationships. Specifically, we constructed a set of regression models to predict (a) trained panel scores for beer flavor and quality and (b) public reviews’ appreciation scores from beer chemical profiles. We trained and tested 10 different models (Methods), 3 linear regression-based models (simple linear regression with first-order interactions (LR), lasso regression with first-order interactions (Lasso), partial least squares regressor (PLSR)), 5 decision tree models (AdaBoost regressor (ABR), extra trees (ET), gradient boosting regressor (GBR), random forest (RF) and XGBoost regressor (XGBR)), 1 support vector regression (SVR), and 1 artificial neural network (ANN) model.

To compare the performance of our machine learning models, the dataset was randomly split into a training and test set, stratified by beer style. After a model was trained on the training set, its performance was evaluated by its ability to predict the test dataset, based on the coefficient of determination (R²) obtained from multi-output models (see Methods). Additionally, individual-attribute models were ranked per descriptor and the average rank was calculated, as proposed by Korneva et al. 64 . Importantly, both ways of evaluating the models' performance agreed in general. Performance of the different models varied (Table  1 ). It should be noted that all models perform better at predicting RateBeer results than results from our trained tasting panel. One reason could be that sensory data is inherently variable, and this variability is averaged out with the large number of public reviews from RateBeer. Additionally, all tree-based models perform better at predicting taste than aroma. Linear models (LR) performed particularly poorly, with negative R² values, due to severe overfitting (training set R² = 1). Overfitting is a common issue in linear models with many parameters and limited samples, especially with interaction terms further amplifying the number of parameters. L1 regularization (Lasso) successfully overcomes this overfitting, out-competing multiple tree-based models on the RateBeer dataset. Similarly, the dimensionality reduction of PLSR avoids overfitting and improves performance, to some extent. Still, tree-based models (ABR, ET, GBR, RF and XGBR) show the best performance, out-competing the linear models (LR, Lasso, PLSR) commonly used in sensory science 65 .
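For readers who want to reproduce the general protocol, a minimal scikit-learn sketch of the evaluation loop might look like the following. The data file, feature prefix and target columns are hypothetical stand-ins for the study's datasets, and hyperparameters are library defaults rather than the tuned values used in the paper.

```python
# Sketch of the comparison protocol: split stratified by beer style, fit several
# regressors on chemical features, compare test-set R^2. Names are placeholders.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.multioutput import MultiOutputRegressor
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import Lasso
from sklearn.metrics import r2_score

df = pd.read_csv("beer_dataset.csv")                              # hypothetical file
chem_cols = [c for c in df.columns if c.startswith("chem_")]      # ~226 chemical properties
target_cols = ["appreciation", "bitterness", "sweetness"]         # example sensory targets

X_train, X_test, y_train, y_test = train_test_split(
    df[chem_cols], df[target_cols],
    test_size=0.2, stratify=df["style"], random_state=42)

models = {
    "Lasso": MultiOutputRegressor(Lasso(alpha=0.1)),
    "RF": RandomForestRegressor(n_estimators=500, random_state=42),
    "GBR": MultiOutputRegressor(GradientBoostingRegressor(random_state=42)),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, round(r2_score(y_test, model.predict(X_test)), 3))
```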

GBR models showed the best overall performance in predicting sensory responses from chemical information, with R² values up to 0.75 depending on the predicted sensory feature (Supplementary Table  S4 ). The GBR models predict consumer appreciation (RateBeer) better than our trained panel's appreciation (R² value of 0.67 compared to R² value of 0.09) (Supplementary Table  S3 and Supplementary Table  S4 ). ANN models showed intermediate performance, likely because neural networks typically perform best with larger datasets 66 . The SVR shows intermediate performance, mostly due to the weak predictions of specific attributes that lower the overall performance (Supplementary Table  S4 ).

Model dissection identifies specific, unexpected compounds as drivers of consumer appreciation

Next, we leveraged our models to infer important contributors to sensory perception and consumer appreciation. Consumer preference is a crucial sensory aspect, because a product that shows low consumer appreciation scores often does not succeed commercially 25 . Additionally, the requirement for a large number of representative evaluators makes consumer trials one of the more costly and time-consuming aspects of product development. Hence, a model for predicting chemical drivers of overall appreciation would be a welcome addition to the available toolbox for food development and optimization.

Since GBR models on our RateBeer dataset showed the best overall performance, we focused on these models. Specifically, we used two approaches to identify important contributors. First, rankings of the most important predictors for each sensorial trait in the GBR models were obtained based on impurity-based feature importance (mean decrease in impurity). High-ranked parameters were hypothesized to be either the true causal chemical properties underlying the trait, to correlate with the actual causal properties, or to take part in sensory interactions affecting the trait 67 (Fig.  4A ). In a second approach, we used SHAP 68 to determine which parameters contributed most to the model for making predictions of consumer appreciation (Fig.  4B ). SHAP calculates parameter contributions to model predictions on a per-sample basis, which can be aggregated into an importance score.

Figure 4

A The impurity-based feature importance (mean decrease in impurity, MDI) calculated from the Gradient Boosting Regression (GBR) model predicting RateBeer appreciation scores. The top 15 highest ranked chemical properties are shown. B SHAP summary plot for the top 15 parameters contributing to our GBR model. Each point on the graph represents a sample from our dataset. The color represents the concentration of that parameter, with bluer colors representing low values and redder colors representing higher values. Greater absolute values on the horizontal axis indicate a higher impact of the parameter on the prediction of the model. C Spearman correlations between the 15 most important chemical properties and consumer overall appreciation. Numbers indicate the Spearman Rho correlation coefficient, and the rank of this correlation compared to all other correlations. The top 15 important compounds were determined using SHAP (panel B).
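Both rankings can be reproduced in outline with scikit-learn and the `shap` package. The sketch below carries over the variables from the previous sketch and fits a single-output model for consumer appreciation; it is an illustration of the two techniques, not the authors' exact code.

```python
# Rank chemical drivers of a fitted gradient-boosting model in two ways:
# (1) impurity-based feature importance (MDI), (2) SHAP values.
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingRegressor

gbr = GradientBoostingRegressor(random_state=42)
gbr.fit(X_train, y_train["appreciation"])        # single target: consumer appreciation

# (1) Mean decrease in impurity, built into scikit-learn tree ensembles
mdi = pd.Series(gbr.feature_importances_, index=chem_cols).sort_values(ascending=False)
print(mdi.head(15))

# (2) SHAP values: per-sample contributions, summarized into a global ranking
explainer = shap.TreeExplainer(gbr)
shap_values = explainer.shap_values(X_test)
shap.summary_plot(shap_values, X_test, max_display=15)
```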

Both approaches identified ethyl acetate as the most predictive parameter for beer appreciation (Fig. 4). Ethyl acetate is the most abundant ester in beer, with a typical ‘fruity’, ‘solvent’ and ‘alcoholic’ flavor, but it is often considered less important than other esters like isoamyl acetate. The second most important parameter identified by SHAP is ethanol, the most abundant beer compound after water. Apart from directly contributing to beer flavor and mouthfeel, ethanol drastically influences the physical properties of beer, dictating how easily volatile compounds escape the beer matrix to contribute to beer aroma 69 . However, the importance of ethanol for appreciation is likely inflated by the very low appreciation scores of non-alcoholic beers (Supplementary Fig. S4). Despite not often being considered a driver of beer appreciation, protein level also ranks highly in both approaches, possibly due to its effect on mouthfeel and body 70 . Lactic acid, which contributes to the tart taste of sour beers, is the fourth most important parameter identified by SHAP, possibly due to the generally high appreciation of sour beers in our dataset.

Interestingly, some of the most important predictive parameters for our model are not well-established beer flavors or are even commonly regarded as negative for beer quality. For example, our models identify methanethiol and ethyl phenyl acetate, an ester commonly linked to beer staling 71 , as key factors contributing to beer appreciation. Although there is no doubt that high concentrations of these compounds are considered unpleasant, the positive effects of modest concentrations are not yet known 72 , 73 .

To compare our approach to conventional statistics, we evaluated how well the 15 most important SHAP-derived parameters correlate with consumer appreciation (Fig. 4C). Interestingly, only 6 of the properties identified by SHAP rank amongst the top 15 most correlated parameters. For some chemical compounds, the correlations are so low that they would likely have been considered unimportant. For example, lactic acid, the fourth most important parameter, shows a bimodal distribution for appreciation, with sour beers forming a separate cluster that is missed entirely by the Spearman correlation. Additionally, the correlation plots reveal outliers, emphasizing the need for robust analysis tools. Together, this highlights the need for alternative models, like the Gradient Boosting model, that better grasp the complexity of (beer) flavor.
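
For reference, the conventional baseline amounts to ranking every parameter by its (absolute) Spearman correlation with appreciation and comparing that ranking to the SHAP-based one, roughly as sketched below on synthetic data.

```python
# Rank chemical parameters by absolute Spearman correlation with appreciation
# (synthetic data; indexing [0] extracts the correlation coefficient).
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
X = rng.lognormal(size=(250, 20))                     # stand-in chemical parameters
appreciation = np.log(X[:, 0]) + rng.normal(scale=0.5, size=250)

rho = np.array([spearmanr(X[:, j], appreciation)[0] for j in range(X.shape[1])])
correlation_rank = np.argsort(np.abs(rho))[::-1]
print(correlation_rank[:15])                          # compare with the top-15 SHAP parameters
```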

Finally, to observe the relationships between these chemical properties and their predicted targets, partial dependence plots were constructed for the six most important predictors of consumer appreciation 74 , 75 , 76 (Supplementary Fig. S7). One-way partial dependence plots show how a change in concentration affects the predicted appreciation. These plots reveal an important limitation of our models: appreciation predictions remain constant at ever-increasing concentrations. This implies that once a threshold concentration is reached, further increases do not affect appreciation. This is clearly unrealistic, as it is well-documented that certain compounds become unpleasant at high concentrations, including ethyl acetate (‘nail polish’) 77 and methanethiol (‘sulfury’ and ‘rotten cabbage’) 78 . The inability of our models to grasp that flavor compounds have optimal levels, above which they become negative, is a consequence of working with commercial beer brands, in which (off-)flavors are rarely high enough to negatively impact the product. The two-way partial dependence plots show how changing the concentrations of two compounds influences predicted appreciation, visualizing their interactions (Supplementary Fig. S7). In our case, the top 5 parameters are dominated by additive or synergistic interactions, with high concentrations of both compounds resulting in the highest predicted appreciation.
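
Partial dependence plots of this kind can be generated directly from a fitted scikit-learn model. The sketch below, on synthetic data with illustrative feature indices, produces two one-way plots and one two-way interaction plot.

```python
# One-way and two-way partial dependence plots for a fitted tree-based model
# (synthetic data, illustrative only).
import numpy as np
import matplotlib.pyplot as plt
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

rng = np.random.default_rng(3)
X = rng.normal(size=(250, 6))                         # stand-in for the six top predictors
y = X[:, 0] + X[:, 1] + 0.5 * X[:, 0] * X[:, 1] + rng.normal(scale=0.3, size=250)
gbr = GradientBoostingRegressor(random_state=0).fit(X, y)

# Two one-way plots and one two-way interaction plot
PartialDependenceDisplay.from_estimator(gbr, X, features=[0, 1, (0, 1)])
plt.tight_layout()
plt.show()
```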

To assess the robustness of our best-performing models and model predictions, we performed 100 iterations of the GBR, RF and ET models. In general, all iterations of the models yielded similar performance (Supplementary Fig.  S8 ). Moreover, the main predictors (including the top predictors ethanol and ethyl acetate) remained virtually the same, especially for GBR and RF. For the iterations of the ET model, we did observe more variation in the top predictors, which is likely a consequence of the model’s inherent random architecture in combination with co-correlations between certain predictors. However, even in this case, several of the top predictors (ethanol and ethyl acetate) remain unchanged, although their rank in importance changes (Supplementary Fig.  S8 ).
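
The robustness check can be mimicked along the following lines: refit the model under different random seeds and track how often a given predictor stays among the top ranks. The data, the number of iterations and the use of subsampling to make the fit stochastic are illustrative choices, not the study's exact settings.

```python
# Refit a gradient boosting model under different seeds and check how stable
# the top predictors are (synthetic data; the study used 100 iterations).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(4)
X = rng.normal(size=(175, 20))
y = X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=175)

top_features = []
for seed in range(10):
    model = GradientBoostingRegressor(random_state=seed, subsample=0.8).fit(X, y)
    top_features.append(np.argsort(model.feature_importances_)[::-1][:5])

# Fraction of iterations in which feature 0 ranks among the top-5 predictors
print(np.mean([0 in top for top in top_features]))
```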

Next, we investigated whether combining the RateBeer and trained panel data into one consolidated dataset would lead to stronger models, under the hypothesis that such a model would suffer less from bias in either dataset. A GBR model was trained to predict appreciation on the combined dataset. This model underperformed compared to the model trained on RateBeer data alone (R² = 0.26 without and 0.42 with a dataset identifier, versus 0.67 for the RateBeer-only model). When the dataset identifier is included, it becomes the most important feature (Supplementary Fig. S9), while most feature importances remain unchanged, with ethyl acetate and ethanol ranking highest, as in the original model trained only on RateBeer data. It seems that the large variation in the panel dataset introduces noise, weakening the model’s performance and reliability. In addition, it seems reasonable to assume that the two datasets are fundamentally different: the panel dataset was obtained through blind tastings by a trained professional panel, whereas the RateBeer scores come from a much larger pool of untrained consumers.

Lastly, we evaluated whether beer style identifiers would further enhance the model’s performance. A GBR model was trained with parameters that explicitly encoded the styles of the samples. This did not improve model performance (R² = 0.66 with style information vs. R² = 0.67 without). The most important chemical features are consistent with those of the model trained without style information (e.g., ethanol and ethyl acetate), and with the exception of the most preferred (strong ale) and least preferred (low/no-alcohol) styles, none of the styles were among the most important features (Supplementary Fig. S9, Supplementary Tables S5 and S6). This is likely due to a combination of style-specific chemical signatures, such as iso-alpha acids and lactic acid, that implicitly convey style information to the original models, and the low number of samples belonging to some styles, which makes it difficult for the model to learn style-specific patterns. Moreover, beer styles are not rigorously defined, with some styles overlapping in features and some beers being misattributed to a specific style, all of which adds noise to models that use style parameters.
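
Encoding beer style as an explicit model input simply amounts to appending one-hot columns to the chemical feature matrix, e.g. as sketched below with hypothetical column names; as noted above, this did not improve performance in our case.

```python
# Append one-hot style indicators to a chemical feature matrix (illustrative data).
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
chem = pd.DataFrame(rng.normal(size=(250, 5)), columns=[f"chem_{i}" for i in range(5)])
styles = pd.Series(rng.choice(["blond", "tripel", "stout", "sour"], size=250), name="style")

X_with_style = pd.concat([chem, pd.get_dummies(styles, prefix="style")], axis=1)
print(X_with_style.columns.tolist())
```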

Model validation

To test whether our predictive models give insight into beer appreciation, we set up experiments aimed at improving existing commercial beers. We specifically selected overall appreciation as the trait to be examined because of its complexity and commercial relevance. Beer flavor comprises a complex bouquet rather than single aromas and tastes 53 . Hence, adding a single compound to the extent that a difference is noticeable may lead to an unbalanced, artificial flavor. Therefore, we evaluated the effect of combinations of compounds. Because Blond beers represent the largest style group in our dataset, we selected a beer from this style as the starting material for these experiments (Beer 64 in Supplementary Data 1).

In the first set of experiments, we adjusted the concentrations of the compounds that made up the most important predictors of overall appreciation (ethyl acetate, ethanol, lactic acid, ethyl phenyl acetate), together with correlated compounds (ethyl hexanoate, isoamyl acetate, glycerol), bringing them up to the 95th percentile of ethanol-normalized concentrations (Methods) within the Blond group (‘Spiked’ concentration in Fig. 5A). Compared to controls, the spiked beers showed significantly improved overall appreciation among trained panelists, with panelists noting increased intensity of ester flavors, sweetness, alcohol, and body fullness (Fig. 5B). To disentangle the contribution of ethanol to these results, a second experiment was performed without the addition of ethanol. This resulted in a similar outcome, including increased perception of alcohol and higher overall appreciation.

Figure 5

Adding the top chemical compounds, identified as best predictors of appreciation by our model, into poorly appreciated beers results in increased appreciation from our trained panel. Results of sensory tests between base beers and those spiked with compounds identified as the best predictors by the model. A Blond and Non/Low-alcohol (0.0% ABV) base beers were brought up to 95th-percentile ethanol-normalized concentrations within each style. B For each sensory attribute, tasters indicated the more intense sample and selected the sample they preferred. The numbers above the bars correspond to the p values that indicate significant changes in perceived flavor (two-sided binomial test: alpha 0.05, n  = 20 or 13).

In a last experiment, we tested whether using the model’s predictions can boost the appreciation of a non-alcoholic beer (beer 223 in Supplementary Data  1 ). Again, the addition of a mixture of predicted compounds (omitting ethanol, in this case) resulted in a significant increase in appreciation, body, ester flavor and sweetness.

Discussion

Predicting flavor and consumer appreciation from chemical composition is one of the ultimate goals of sensory science. A reliable, systematic and unbiased way to link chemical profiles to flavor and food appreciation would be a significant asset to the food and beverage industry. Such tools would substantially aid in quality control and recipe development, offer an efficient and cost-effective alternative to pilot studies and consumer trials, and ultimately allow food manufacturers to more efficiently produce superior, tailor-made products that better meet the demands of specific consumer groups.

A limited set of studies has previously tried, with varying degrees of success, to predict beer flavor and beer popularity based on (a limited set of) chemical compounds and flavors 79 , 80 . Current sensitive, high-throughput technologies allow measuring an unprecedented number of chemical compounds and properties in a large set of samples, yielding a dataset that can train models to help close the gap between chemistry and flavor, even for a complex natural product like beer. To our knowledge, no previous research gathered data at this scale (250 samples, 226 chemical parameters, 50 sensory attributes and 5 consumer scores) to disentangle and validate the chemical aspects driving beer preference using various machine-learning techniques.

We find that modern machine learning models outperform conventional statistical tools, such as correlations and linear models, and can successfully predict flavor appreciation from chemical composition. This can be attributed to the natural incorporation of interactions and non-linear or discontinuous effects in machine learning models, which are not easily captured by a linear model architecture. While linear models and partial least squares regression represent the most widespread statistical approaches in sensory science, in part because they allow interpretation 65 , 81 , 82 , modern machine learning methods allow for building better predictive models while preserving the possibility to dissect and exploit the underlying patterns. Of the 10 different models we trained, tree-based models, such as our best-performing GBR, showed the best overall performance in predicting sensory responses from chemical information, outcompeting artificial neural networks. This agrees with previous reports for models trained on tabular data 83 . Our results are in line with the findings of Colantonio et al., who also identified the gradient boosting architecture as performing best at predicting appreciation and flavor (of tomatoes and blueberries, in their specific study) 26 . Importantly, besides our larger experimental scale, we were able to directly confirm our models’ predictions in vivo.

Our study confirms that flavor compound concentration does not always correlate with perception, suggesting complex interactions that are often missed by more conventional statistics and simple models. Specifically, we find that tree-based algorithms may perform best in developing models that link complex food chemistry with aroma. Furthermore, we show that massive datasets of untrained consumer reviews provide a valuable source of data, that can complement or even replace trained tasting panels, especially for appreciation and basic flavors, such as sweetness and bitterness. This holds despite biases that are known to occur in such datasets, such as price or conformity bias. Moreover, GBR models predict taste better than aroma. This is likely because taste (e.g. bitterness) often directly relates to the corresponding chemical measurements (e.g., iso-alpha acids), whereas such a link is less clear for aromas, which often result from the interplay between multiple volatile compounds. We also find that our models are best at predicting acidity and alcohol, likely because there is a direct relation between the measured chemical compounds (acids and ethanol) and the corresponding perceived sensorial attribute (acidity and alcohol), and because even untrained consumers are generally able to recognize these flavors and aromas.

The predictions of our final models, trained on review data, hold even for blind tastings with small groups of trained tasters, as demonstrated by our ability to validate specific compounds as drivers of beer flavor and appreciation. Since adding a single compound to the extent of a noticeable difference may result in an unbalanced flavor profile, we specifically tested our identified key drivers as a combination of compounds. While this approach does not allow us to validate if a particular single compound would affect flavor and/or appreciation, our experiments do show that this combination of compounds increases consumer appreciation.

It is important to stress that, while it represents an important step forward, our approach still has several major limitations. A key weakness of the GBR model architecture is that, amongst co-correlating variables, the largest main effect is consistently preferred for model building. As a result, co-correlating variables often have artificially low importance scores, both for impurity- and SHAP-based methods, as we observed in the comparison to the more randomized Extra Trees models. This implies that chemicals identified as key drivers of a specific sensory feature by GBR might not be the true causative compounds, but rather co-correlate with the actual causative chemical. For example, the high importance of ethyl acetate could be (partially) attributed to the total ester content, ethanol or ethyl hexanoate (rho = 0.77, 0.72 and 0.68, respectively), while ethyl phenylacetate could hide the importance of prenyl isobutyrate and ethyl benzoate (rho = 0.77 and 0.76). Expanding our GBR model to include beer style as a parameter did not yield additional power or insight. This is likely due to style-specific chemical signatures, such as iso-alpha acids and lactic acid, that implicitly convey style information to the original model, as well as the smaller sample size per style, which limits the power to uncover style-specific patterns. This can be partly attributed to the curse of dimensionality, where the high number of parameters results in the models mainly incorporating single-parameter effects rather than complex interactions such as style-dependent effects 67 . A larger number of samples may overcome some of these limitations and offer more insight into style-specific effects. On the other hand, beer style is not a rigid scientific classification, and beers within one style often differ considerably, which further complicates the analysis of style as a model factor.

Our study is limited to beers from Belgian breweries. Although these beers cover a large portion of the beer styles available globally, some beer styles and consumer patterns may be missing, while other features might be overrepresented. For example, many Belgian ales exhibit yeast-driven flavor profiles, which is reflected in the chemical drivers of appreciation discovered by this study. In future work, expanding the scope to include diverse markets and beer styles could lead to the identification of even more drivers of appreciation and better models for special niche products that were not present in our beer set.

In addition to the inherent limitations of GBR models, there are also some limitations associated with studying food aroma. Even though our chemical analyses measured most of the known aroma compounds, the total number of flavor compounds in complex foods like beer is still larger than the subset we were able to measure in this study. For example, hop-derived thiols, which influence flavor at very low concentrations, are notoriously difficult to measure in a high-throughput experiment. Moreover, consumer perception remains subjective and prone to biases that are difficult to avoid. It is also important to stress that the models are still immature and that more extensive datasets will be crucial for developing more complete models in the future. Besides more samples and parameters, our dataset does not include any demographic information about the tasters. Including such data could lead to better models that grasp external factors like age and culture. Another limitation is that our set of beers consists of high-quality end-products and lacks beers that are unfit for sale, which limits the current model in accurately predicting products that are appreciated very poorly. Finally, while the models could readily be applied in quality control, their use in sensory science and product development is restrained by their inability to discern causal relationships. Given that the models cannot distinguish compounds that genuinely drive consumer perception from those that merely correlate, validation experiments remain essential to identify true causative compounds.

Despite the inherent limitations, dissection of our models enabled us to pinpoint specific molecules as potential drivers of beer aroma and consumer appreciation, including compounds that were unexpected and would not have been identified using standard approaches. Important drivers of beer appreciation uncovered by our models include protein levels, ethyl acetate, ethyl phenyl acetate and lactic acid. Currently, many brewers already use lactic acid to acidify their brewing water and ensure optimal pH for enzymatic activity during the mashing process. Our results suggest that adding lactic acid can also improve beer appreciation, although its individual effect remains to be tested. Interestingly, ethanol appears to be unnecessary to improve beer appreciation, both for blond beer and alcohol-free beer. Given the growing consumer interest in alcohol-free beer, with a predicted annual market growth of >7% 84 , it is relevant for brewers to know what compounds can further increase consumer appreciation of these beers. Hence, our model may readily provide avenues to further improve the flavor and consumer appreciation of both alcoholic and non-alcoholic beers, which is generally considered one of the key challenges for future beer production.

Whereas we see a direct implementation of our results for the development of superior alcohol-free beverages and other food products, our study can also serve as a stepping stone for the development of novel alcohol-containing beverages. We want to echo the growing body of scientific evidence for the negative effects of alcohol consumption, both on the individual level by the mutagenic, teratogenic and carcinogenic effects of ethanol 85 , 86 , as well as the burden on society caused by alcohol abuse and addiction. We encourage the use of our results for the production of healthier, tastier products, including novel and improved beverages with lower alcohol contents. Furthermore, we strongly discourage the use of these technologies to improve the appreciation or addictive properties of harmful substances.

The present work demonstrates that despite some important remaining hurdles, combining the latest developments in chemical analyses, sensory analysis and modern machine learning methods offers exciting avenues for food chemistry and engineering. Soon, these tools may provide solutions in quality control and recipe development, as well as new approaches to sensory science and flavor research.

Methods

Beer selection

250 commercial Belgian beers were selected to cover the broad diversity of beer styles and corresponding diversity in chemical composition and aroma. See Supplementary Fig.  S1 .

Chemical dataset

Sample preparation.

Beers within their expiration date were purchased from commercial retailers. Samples were prepared in biological duplicates at room temperature, unless explicitly stated otherwise. Bottle pressure was measured with a manual pressure device (Steinfurth Mess-Systeme GmbH) and used to calculate CO 2 concentration. The beer was poured through two filter papers (Macherey-Nagel, 500713032 MN 713 ¼) to remove carbon dioxide and prevent spontaneous foaming. Samples were then prepared for measurements by targeted Headspace-Gas Chromatography-Flame Ionization Detector/Flame Photometric Detector (HS-GC-FID/FPD), Headspace-Solid Phase Microextraction-Gas Chromatography-Mass Spectrometry (HS-SPME-GC-MS), colorimetric analysis, enzymatic analysis, Near-Infrared (NIR) analysis, as described in the sections below. The mean values of biological duplicates are reported for each compound.

HS-GC-FID/FPD

HS-GC-FID/FPD (Shimadzu GC 2010 Plus) was used to measure higher alcohols, acetaldehyde, esters, 4-vinyl guaiacol, and sulfur compounds. Each measurement comprised 5 ml of sample pipetted into a 20 ml glass vial containing 1.75 g NaCl (VWR, 27810.295). 100 µl of 2-heptanol (Sigma-Aldrich, H3003) (internal standard) solution in ethanol (Fisher Chemical, E/0650DF/C17) was added for a final concentration of 2.44 mg/L. Samples were flushed with nitrogen for 10 s, sealed with a silicone septum, stored at −80 °C and analyzed in batches of 20.

The GC was equipped with a DB-WAXetr column (length, 30 m; internal diameter, 0.32 mm; layer thickness, 0.50 µm; Agilent Technologies, Santa Clara, CA, USA) coupled to the FID and an HP-5 column (length, 30 m; internal diameter, 0.25 mm; layer thickness, 0.25 µm; Agilent Technologies, Santa Clara, CA, USA) coupled to the FPD. N 2 was used as the carrier gas. Samples were incubated for 20 min at 70 °C in the headspace autosampler (flow rate, 35 cm/s; injection volume, 1000 µL; injection mode, split; Combi PAL autosampler, CTC Analytics, Switzerland). The injector, FID and FPD temperatures were kept at 250 °C. The GC oven temperature was first held at 50 °C for 5 min and then allowed to rise to 80 °C at a rate of 5 °C/min, followed by a second ramp of 4 °C/min until 200 °C, held for 3 min, and a final ramp of 4 °C/min until 230 °C, held for 1 min. Results were analyzed with the GCSolution software version 2.4 (Shimadzu, Kyoto, Japan). The GC was calibrated with a 5% EtOH solution (VWR International) containing the volatiles under study (Supplementary Table S7).

HS-SPME-GC-MS

HS-SPME-GC-MS (Shimadzu GCMS-QP-2010 Ultra) was used to measure additional volatile compounds, mainly comprising terpenoids and esters. Samples were analyzed by HS-SPME using a triphase DVB/Carboxen/PDMS 50/30 μm SPME fiber (Supelco Co., Bellefonte, PA, USA) followed by gas chromatography (Thermo Fisher Scientific Trace 1300 series, USA) coupled to a mass spectrometer (Thermo Fisher Scientific ISQ series MS) equipped with a TriPlus RSH autosampler. 5 ml of degassed beer sample was placed in 20 ml vials containing 1.75 g NaCl (VWR, 27810.295). 5 µl internal standard mix was added, containing 2-heptanol (1 g/L) (Sigma-Aldrich, H3003), 4-fluorobenzaldehyde (1 g/L) (Sigma-Aldrich, 128376), 2,3-hexanedione (1 g/L) (Sigma-Aldrich, 144169) and guaiacol (1 g/L) (Sigma-Aldrich, W253200) in ethanol (Fisher Chemical, E/0650DF/C17). Each sample was incubated at 60 °C in the autosampler oven with constant agitation. After 5 min equilibration, the SPME fiber was exposed to the sample headspace for 30 min. The compounds trapped on the fiber were thermally desorbed in the injection port of the chromatograph by heating the fiber for 15 min at 270 °C.

The GC-MS was equipped with a low-polarity RXi-5Sil MS column (length, 20 m; internal diameter, 0.18 mm; layer thickness, 0.18 µm; Restek, Bellefonte, PA, USA). Injection was performed in splitless mode at 320 °C, with a split flow of 9 ml/min, a purge flow of 5 ml/min and an open valve time of 3 min. To obtain a pulsed injection, a programmed gas flow was used whereby the helium gas flow was set at 2.7 mL/min for 0.1 min, followed by a decrease in flow of 20 ml/min to the normal 0.9 mL/min. The temperature was first held at 30 °C for 3 min and then allowed to rise to 80 °C at a rate of 7 °C/min, followed by a second ramp of 2 °C/min until 125 °C and a final ramp of 8 °C/min to a final temperature of 270 °C.

Mass acquisition range was 33 to 550 amu at a scan rate of 5 scans/s. Electron impact ionization energy was 70 eV. The interface and ion source were kept at 275 °C and 250 °C, respectively. A mix of linear n-alkanes (from C7 to C40, Supelco Co.) was injected into the GC-MS under identical conditions to serve as external retention index markers. Identification and quantification of the compounds were performed using an in-house developed R script as described in Goelen et al. and Reher et al. 87 , 88 (for package information, see Supplementary Table  S8 ). Briefly, chromatograms were analyzed using AMDIS (v2.71) 89 to separate overlapping peaks and obtain pure compound spectra. The NIST MS Search software (v2.0 g) in combination with the NIST2017, FFNSC3 and Adams4 libraries were used to manually identify the empirical spectra, taking into account the expected retention time. After background subtraction and correcting for retention time shifts between samples run on different days based on alkane ladders, compound elution profiles were extracted and integrated using a file with 284 target compounds of interest, which were either recovered in our identified AMDIS list of spectra or were known to occur in beer. Compound elution profiles were estimated for every peak in every chromatogram over a time-restricted window using weighted non-negative least square analysis after which peak areas were integrated 87 , 88 . Batch effect correction was performed by normalizing against the most stable internal standard compound, 4-fluorobenzaldehyde. Out of all 284 target compounds that were analyzed, 167 were visually judged to have reliable elution profiles and were used for final analysis.

Discrete photometric and enzymatic analysis

Discrete photometric and enzymatic analysis (Thermo Scientific TM Gallery TM Plus Beermaster Discrete Analyzer) was used to measure acetic acid, ammonia, beta-glucan, iso-alpha acids, color, sugars, glycerol, iron, pH, protein, and sulfite. 2 ml of sample volume was used for the analyses. Information regarding the reagents and standard solutions used for analyses and calibrations is included in Supplementary Table  S7 and Supplementary Table  S9 .

NIR analyses

NIR analysis (Anton Paar Alcolyzer Beer ME System) was used to measure ethanol. Measurements comprised 50 ml of sample, and a 10% EtOH solution was used for calibration.

Correlation calculations

Pairwise Spearman Rank correlations were calculated between all chemical properties.

Sensory dataset

Trained panel.

Our trained tasting panel consisted of volunteers who gave prior verbal informed consent. All compounds used for the validation experiment were of food-grade quality. The tasting sessions were approved by the Social and Societal Ethics Committee of the KU Leuven (G-2022-5677-R2(MAR)). All online reviewers agreed to the Terms and Conditions of the RateBeer website.

Sensory analysis was performed according to the American Society of Brewing Chemists (ASBC) Sensory Analysis Methods 90 . Thirty volunteers were screened through a series of triangle tests. The sixteen most sensitive and consistent tasters were retained as taste panel members. The resulting panel was diverse in age [22–42, mean: 29], sex [56% male] and nationality [7 different countries]. The panel developed a consensus vocabulary to describe beer aroma, taste and mouthfeel. Panelists were trained to identify and score 50 different attributes, using a 7-point scale to rate the intensity of each attribute. The scoring sheet is included as Supplementary Data 3. Sensory assessments took place between 10 a.m. and noon. The beers were served in black-colored glasses. Per session, between 5 and 12 beers of the same style were tasted at 12 °C to 16 °C. Two reference beers were added to each set and indicated as ‘Reference 1 & 2’, allowing panel members to calibrate their ratings. Not all panelists were present at every tasting. Scores were scaled by standard deviation and mean-centered per taster. Values are represented as z-scores and clustered by Euclidean distance. Pairwise Spearman correlations were calculated between taste and aroma sensory attributes. Panel consistency was evaluated by repeating samples in different sessions and performing ANOVA to identify differences, using the ‘stats’ package (v4.2.2) in R (for package information, see Supplementary Table S8).
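
The per-taster standardization and the consistency check can be illustrated as follows, using a made-up long-format table; the column names, scores and grouping are hypothetical and only meant to show the shape of the calculation.

```python
# Per-taster z-scoring and a simple consistency check on repeated servings
# (hypothetical data; the actual analysis used the 'stats' package in R).
import numpy as np
import pandas as pd
from scipy.stats import f_oneway

rng = np.random.default_rng(5)
df = pd.DataFrame({
    "taster": rng.choice(list("ABCDEFGH"), size=300),
    "session": rng.choice([1, 2, 3], size=300),
    "beer": rng.integers(1, 11, size=300),
    "score": rng.integers(1, 8, size=300).astype(float),   # 7-point intensity scale
})

# Scale by standard deviation and mean-center per taster -> z-scores
df["z"] = df.groupby("taster")["score"].transform(lambda s: (s - s.mean()) / s.std())

# Consistency: for a beer served in several sessions, session means should not differ
repeated = df[df["beer"] == 1]
print(f_oneway(*[g["z"].values for _, g in repeated.groupby("session")]))
```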

Online reviews from a public database

The ‘scrapy’ package in Python (v3.6) (for package information, see Supplementary Table S8) was used to collect 232,288 online reviews (mean = 922, min = 6, max = 5343) from RateBeer, an online beer review database. Each review entry comprised five numerical scores (appearance, aroma, taste, palate and overall quality) and an optional review text. The total number of reviews per reviewer was collected separately. Numerical scores were scaled and centered per rater, and mean scores were calculated per beer.

For the review texts, the language was estimated using the packages ‘langdetect’ and ‘langid’ in Python. Reviews that were classified as English by both packages were kept. Reviewers with fewer than 100 entries overall were discarded. 181,025 reviews from >6000 reviewers from >40 countries remained. Text processing was done using the ‘nltk’ package in Python. Texts were corrected for slang and misspellings; proper nouns and rare words relevant to the beer context were specified and kept as-is (‘Chimay’, ‘Lambic’, etc.). A dictionary of semantically similar sensorial terms (for example, ‘floral’ and ‘flower’) was created, and such terms were collapsed into a single term. Words were stemmed and lemmatized to avoid identifying words such as ‘acid’ and ‘acidity’ as separate terms. Numbers and punctuation were removed.
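
A compressed sketch of this kind of clean-up is shown below. It uses `langdetect` and NLTK's Porter stemmer only, whereas the actual pipeline additionally applied `langid`, lemmatization, slang correction and the custom beer vocabulary.

```python
# Keep English reviews, strip punctuation and digits, and stem the remaining words
# (a simplified stand-in for the full text-processing pipeline).
import string
from langdetect import detect
from nltk.stem import PorterStemmer

review = "Lovely floral aroma, slight acidity, very refreshing!"
if detect(review) == "en":
    text = review.lower().translate(str.maketrans("", "", string.punctuation + string.digits))
    stems = [PorterStemmer().stem(token) for token in text.split()]  # collapse related word forms
    print(stems)
```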

Sentences from up to 50 randomly chosen reviews per beer were manually categorized according to the aspect of beer they describe (appearance, aroma, taste, palate, overall quality—not to be confused with the 5 numerical scores described above) or flagged as irrelevant if they contained no useful information. If a beer contained fewer than 50 reviews, all reviews were manually classified. This labeled data set was used to train a model that classified the rest of the sentences for all beers 91 . Sentences describing taste and aroma were extracted, and term frequency–inverse document frequency (TFIDF) was implemented to calculate enrichment scores for sensorial words per beer.
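
The enrichment step corresponds to a standard TF-IDF computation over one pooled taste/aroma document per beer, roughly as sketched below with a toy corpus.

```python
# TF-IDF enrichment scores for sensory terms, one pooled document per beer (toy corpus).
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer

docs = {
    "beer_A": "fruity ester banana sweet fruity",
    "beer_B": "bitter hoppy citrus bitter dry",
    "beer_C": "sour tart lactic funky sour",
}
vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(docs.values())
scores = pd.DataFrame(tfidf.toarray(), index=list(docs.keys()),
                      columns=vectorizer.get_feature_names_out())
print(scores.round(2))
```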

The sex of the tasting subject was not considered when building our sensory database. Instead, results from different panelists were averaged, both for our trained panel (56% male, 44% female) and the RateBeer reviews (70% male, 30% female for RateBeer as a whole).

Beer price collection and processing

Beer prices were collected from the following stores: Colruyt, Delhaize, Total Wine, BeerHawk, The Belgian Beer Shop, The Belgian Shop, and Beer of Belgium. Where applicable, prices were converted to Euros and normalized per liter. Spearman correlations were calculated between these prices and mean overall appreciation scores from RateBeer and the taste panel, respectively.

Pairwise Spearman Rank correlations were calculated between all sensory properties.

Machine learning models

Predictive modeling of sensory profiles from chemical data.

Regression models were constructed to predict (a) trained panel scores for beer flavors and quality from beer chemical profiles and (b) public reviews’ appreciation scores from beer chemical profiles. Z-scores were used to represent sensory attributes in both data sets. Chemical properties with log-normal distributions (Shapiro-Wilk test, p < 0.05) were log-transformed. Missing chemical measurements (0.1% of all data) were replaced with mean values per attribute. Observations from 250 beers were randomly separated into a training set (70%, 175 beers) and a test set (30%, 75 beers), stratified per beer style. Chemical measurements (p = 231 parameters) were normalized based on the training set average and standard deviation. In total, ten models were trained: three linear regression-based models, namely linear regression with first-order interaction terms (LR), lasso regression with first-order interaction terms (Lasso) and partial least squares regression (PLSR); five decision tree models, namely AdaBoost regressor (ABR), Extra Trees (ET), Gradient Boosting regressor (GBR), Random Forest (RF) and XGBoost regressor (XGBR); one support vector machine model (SVR); and one artificial neural network model (ANN). The models were implemented using the ‘scikit-learn’ package (v1.2.2) and the ‘xgboost’ package (v1.7.3) in Python (v3.9.16). Models were trained, and hyperparameters optimized, using five-fold cross-validated grid search with the coefficient of determination (R²) as the evaluation metric. The ANN (scikit-learn’s MLPRegressor) was optimized using Bayesian Tree-structured Parzen Estimator optimization with the ‘Optuna’ Python package (v3.2.0). Individual models were trained per attribute, and a multi-output model was trained on all attributes simultaneously.
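
The sketch below condenses this preprocessing and training recipe on synthetic data; the Shapiro-Wilk rule, the hyperparameter grid and all numbers shown are illustrative stand-ins rather than the exact settings used in the study.

```python
# Condensed sketch: log-transform skewed features, mean-impute, stratified split,
# normalize with training-set statistics, then grid-search a GBR with 5-fold CV.
import numpy as np
import pandas as pd
from scipy.stats import shapiro
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(6)
X = pd.DataFrame(rng.lognormal(size=(250, 10)))
y = np.log(X.iloc[:, 0]) + rng.normal(scale=0.3, size=250)
styles = rng.choice(["blond", "tripel", "stout", "sour"], size=250)

# Log-transform positive features that fail a normality test (rough heuristic)
for col in X.columns:
    if shapiro(X[col])[1] < 0.05 and (X[col] > 0).all():
        X[col] = np.log(X[col])

X = X.fillna(X.mean())                                # mean imputation of missing measurements

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=styles, random_state=0)
mu, sd = X_tr.mean(), X_tr.std()
X_tr, X_te = (X_tr - mu) / sd, (X_te - mu) / sd       # normalize with training-set statistics only

grid = GridSearchCV(GradientBoostingRegressor(random_state=0),
                    {"n_estimators": [100, 300], "learning_rate": [0.05, 0.1]},
                    cv=5, scoring="r2")
grid.fit(X_tr, y_tr)
print(grid.best_params_, round(grid.score(X_te, y_te), 2))
```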

Model dissection

GBR was found to outperform other methods, resulting in models with the highest average R² values in both trained panel and public review data sets. Impurity-based rankings of the most important predictors for each predicted sensorial trait were obtained using the ‘scikit-learn’ package. To observe the relationships between these chemical properties and their predicted targets, partial dependence plots (PDP) were constructed for the six most important predictors of consumer appreciation 74 , 75 .

The ‘SHAP’ package in Python (v0.41.0) was implemented to provide an alternative ranking of predictor importance and to visualize the predictors’ effects as a function of their concentration 68 .

Validation of causal chemical properties

To validate the effects of the most important model features on predicted sensory attributes, beers were spiked with the chemical compounds identified by the models and descriptive sensory analyses were carried out according to the American Society of Brewing Chemists (ASBC) protocol 90 .

Compound spiking was done 30 min before tasting. Compounds were spiked into fresh beer bottles, which were immediately resealed and inverted three times. Fresh bottles of beer were opened for the same duration, resealed and inverted three times to serve as controls. Pairs of spiked samples and controls were served simultaneously, chilled and in dark glasses, as outlined in the Trained panel section above. Tasters were instructed to select the glass with the higher flavor intensity for each attribute (directional difference test 92 ) and to select the glass they preferred.

The final concentration after spiking was equal to the within-style average, after normalizing by ethanol concentration. This was done to ensure balanced flavor profiles in the final spiked beer. The same methods were applied to improve a non-alcoholic beer. Compounds were the following: ethyl acetate (Merck KGaA, W241415), ethyl hexanoate (Merck KGaA, W243906), isoamyl acetate (Merck KGaA, W205508), phenethyl acetate (Merck KGaA, W285706), ethanol (96%, Colruyt), glycerol (Merck KGaA, W252506), lactic acid (Merck KGaA, 261106).

Significant differences in preference or perceived intensity were determined by performing the two-sided binomial test on each attribute.
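
For a single attribute, the test reduces to the following: with, say, a hypothetical 16 out of 20 tasters preferring the spiked sample, a two-sided binomial test against a 50/50 null yields the reported p value (the counts here are made up).

```python
# Two-sided binomial test for a paired preference count (hypothetical numbers).
from scipy.stats import binomtest

n_tasters, n_prefer_spiked = 20, 16
result = binomtest(n_prefer_spiked, n=n_tasters, p=0.5, alternative="two-sided")
print(round(result.pvalue, 4))
```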

Reporting summary

Further information on research design is available in the  Nature Portfolio Reporting Summary linked to this article.

Data availability

The data that support the findings of this work are available in the Supplementary Data files and have been deposited to Zenodo under accession code 10653704 93 . The RateBeer scores are under restricted access; they are not publicly available as they are the property of RateBeer (ZX Ventures, USA). Access can be obtained from the authors upon reasonable request and with permission of RateBeer (ZX Ventures, USA). Source data are provided with this paper.

Code availability

The code for training the machine learning models, analyzing the models, and generating the figures has been deposited to Zenodo under accession code 10653704 93 .

References

Tieman, D. et al. A chemical genetic roadmap to improved tomato flavor. Science 355 , 391–394 (2017).

Plutowska, B. & Wardencki, W. Application of gas chromatography–olfactometry (GC–O) in analysis and quality assessment of alcoholic beverages – A review. Food Chem. 107 , 449–463 (2008).

Legin, A., Rudnitskaya, A., Seleznev, B. & Vlasov, Y. Electronic tongue for quality assessment of ethanol, vodka and eau-de-vie. Anal. Chim. Acta 534 , 129–135 (2005).

Loutfi, A., Coradeschi, S., Mani, G. K., Shankar, P. & Rayappan, J. B. B. Electronic noses for food quality: A review. J. Food Eng. 144 , 103–111 (2015).

Ahn, Y.-Y., Ahnert, S. E., Bagrow, J. P. & Barabási, A.-L. Flavor network and the principles of food pairing. Sci. Rep. 1 , 196 (2011).

Bartoshuk, L. M. & Klee, H. J. Better fruits and vegetables through sensory analysis. Curr. Biol. 23 , R374–R378 (2013).

Piggott, J. R. Design questions in sensory and consumer science. Food Qual. Prefer. 3293 , 217–220 (1995).

Kermit, M. & Lengard, V. Assessing the performance of a sensory panel-panellist monitoring and tracking. J. Chemom. 19 , 154–161 (2005).

Cook, D. J., Hollowood, T. A., Linforth, R. S. T. & Taylor, A. J. Correlating instrumental measurements of texture and flavour release with human perception. Int. J. Food Sci. Technol. 40 , 631–641 (2005).

Chinchanachokchai, S., Thontirawong, P. & Chinchanachokchai, P. A tale of two recommender systems: The moderating role of consumer expertise on artificial intelligence based product recommendations. J. Retail. Consum. Serv. 61 , 1–12 (2021).

Ross, C. F. Sensory science at the human-machine interface. Trends Food Sci. Technol. 20 , 63–72 (2009).

Chambers, E. IV & Koppel, K. Associations of volatile compounds with sensory aroma and flavor: The complex nature of flavor. Molecules 18 , 4887–4905 (2013).

Pinu, F. R. Metabolomics—The new frontier in food safety and quality research. Food Res. Int. 72 , 80–81 (2015).

Danezis, G. P., Tsagkaris, A. S., Brusic, V. & Georgiou, C. A. Food authentication: state of the art and prospects. Curr. Opin. Food Sci. 10 , 22–31 (2016).

Shepherd, G. M. Smell images and the flavour system in the human brain. Nature 444 , 316–321 (2006).

Meilgaard, M. C. Prediction of flavor differences between beers from their chemical composition. J. Agric. Food Chem. 30 , 1009–1017 (1982).

Xu, L. et al. Widespread receptor-driven modulation in peripheral olfactory coding. Science 368 , eaaz5390 (2020).

Kupferschmidt, K. Following the flavor. Science 340 , 808–809 (2013).

Billesbølle, C. B. et al. Structural basis of odorant recognition by a human odorant receptor. Nature 615 , 742–749 (2023).

Smith, B. Perspective: Complexities of flavour. Nature 486 , S6–S6 (2012).

Pfister, P. et al. Odorant receptor inhibition is fundamental to odor encoding. Curr. Biol. 30 , 2574–2587 (2020).

Moskowitz, H. W., Kumaraiah, V., Sharma, K. N., Jacobs, H. L. & Sharma, S. D. Cross-cultural differences in simple taste preferences. Science 190 , 1217–1218 (1975).

Eriksson, N. et al. A genetic variant near olfactory receptor genes influences cilantro preference. Flavour 1 , 22 (2012).

Ferdenzi, C. et al. Variability of affective responses to odors: Culture, gender, and olfactory knowledge. Chem. Senses 38 , 175–186 (2013).

Lawless, H. T. & Heymann, H. Sensory evaluation of food: Principles and practices. (Springer, New York, NY). https://doi.org/10.1007/978-1-4419-6488-5 (2010).

Colantonio, V. et al. Metabolomic selection for enhanced fruit flavor. Proc. Natl. Acad. Sci. 119 , e2115865119 (2022).

Fritz, F., Preissner, R. & Banerjee, P. VirtualTaste: a web server for the prediction of organoleptic properties of chemical compounds. Nucleic Acids Res 49 , W679–W684 (2021).

Tuwani, R., Wadhwa, S. & Bagler, G. BitterSweet: Building machine learning models for predicting the bitter and sweet taste of small molecules. Sci. Rep. 9 , 1–13 (2019).

Dagan-Wiener, A. et al. Bitter or not? BitterPredict, a tool for predicting taste from chemical structure. Sci. Rep. 7 , 1–13 (2017).

Pallante, L. et al. Toward a general and interpretable umami taste predictor using a multi-objective machine learning approach. Sci. Rep. 12 , 1–11 (2022).

Malavolta, M. et al. A survey on computational taste predictors. Eur. Food Res. Technol. 248 , 2215–2235 (2022).

Lee, B. K. et al. A principal odor map unifies diverse tasks in olfactory perception. Science 381 , 999–1006 (2023).

Mayhew, E. J. et al. Transport features predict if a molecule is odorous. Proc. Natl. Acad. Sci. 119 , e2116576119 (2022).

Niu, Y. et al. Sensory evaluation of the synergism among ester odorants in light aroma-type liquor by odor threshold, aroma intensity and flash GC electronic nose. Food Res. Int. 113 , 102–114 (2018).

Yu, P., Low, M. Y. & Zhou, W. Design of experiments and regression modelling in food flavour and sensory analysis: A review. Trends Food Sci. Technol. 71 , 202–215 (2018).

Oladokun, O. et al. The impact of hop bitter acid and polyphenol profiles on the perceived bitterness of beer. Food Chem. 205 , 212–220 (2016).

Linforth, R., Cabannes, M., Hewson, L., Yang, N. & Taylor, A. Effect of fat content on flavor delivery during consumption: An in vivo model. J. Agric. Food Chem. 58 , 6905–6911 (2010).

Guo, S., Na Jom, K. & Ge, Y. Influence of roasting condition on flavor profile of sunflower seeds: A flavoromics approach. Sci. Rep. 9 , 11295 (2019).

Ren, Q. et al. The changes of microbial community and flavor compound in the fermentation process of Chinese rice wine using Fagopyrum tataricum grain as feedstock. Sci. Rep. 9 , 3365 (2019).

Hastie, T., Friedman, J. & Tibshirani, R. The Elements of Statistical Learning. (Springer, New York, NY). https://doi.org/10.1007/978-0-387-21606-5 (2001).

Dietz, C., Cook, D., Huismann, M., Wilson, C. & Ford, R. The multisensory perception of hop essential oil: a review. J. Inst. Brew. 126 , 320–342 (2020).

Roncoroni, M. & Verstrepen, K. J. Belgian Beer: Tested and Tasted. (Lannoo, 2018).

Meilgaard, M. Flavor chemistry of beer: Part II: Flavor and threshold of 239 aroma volatiles. (1975).

Bokulich, N. A. & Bamforth, C. W. The microbiology of malting and brewing. Microbiol. Mol. Biol. Rev. MMBR 77 , 157–172 (2013).

Dzialo, M. C., Park, R., Steensels, J., Lievens, B. & Verstrepen, K. J. Physiology, ecology and industrial applications of aroma formation in yeast. FEMS Microbiol. Rev. 41 , S95–S128 (2017).

Datta, A. et al. Computer-aided food engineering. Nat. Food 3 , 894–904 (2022).

American Society of Brewing Chemists. Beer Methods. (American Society of Brewing Chemists, St. Paul, MN, U.S.A.).

Olaniran, A. O., Hiralal, L., Mokoena, M. P. & Pillay, B. Flavour-active volatile compounds in beer: production, regulation and control. J. Inst. Brew. 123 , 13–23 (2017).

Verstrepen, K. J. et al. Flavor-active esters: Adding fruitiness to beer. J. Biosci. Bioeng. 96 , 110–118 (2003).

Meilgaard, M. C. Flavour chemistry of beer. part I: flavour interaction between principal volatiles. Master Brew. Assoc. Am. Tech. Q 12 , 107–117 (1975).

Briggs, D. E., Boulton, C. A., Brookes, P. A. & Stevens, R. Brewing 227–254. (Woodhead Publishing). https://doi.org/10.1533/9781855739062.227 (2004).

Bossaert, S., Crauwels, S., De Rouck, G. & Lievens, B. The power of sour - A review: Old traditions, new opportunities. BrewingScience 72 , 78–88 (2019).

Verstrepen, K. J. et al. Flavor active esters: Adding fruitiness to beer. J. Biosci. Bioeng. 96 , 110–118 (2003).

Snauwaert, I. et al. Microbial diversity and metabolite composition of Belgian red-brown acidic ales. Int. J. Food Microbiol. 221 , 1–11 (2016).

Spitaels, F. et al. The microbial diversity of traditional spontaneously fermented lambic beer. PLoS ONE 9 , e95384 (2014).

Blanco, C. A., Andrés-Iglesias, C. & Montero, O. Low-alcohol Beers: Flavor Compounds, Defects, and Improvement Strategies. Crit. Rev. Food Sci. Nutr. 56 , 1379–1388 (2016).

Jackowski, M. & Trusek, A. Non-alcoholic beer production: an overview. 20 , 32–38 (2018).

Takoi, K. et al. The contribution of geraniol metabolism to the citrus flavour of beer: Synergy of geraniol and β-citronellol under coexistence with excess linalool. J. Inst. Brew. 116 , 251–260 (2010).

Kroeze, J. H. & Bartoshuk, L. M. Bitterness suppression as revealed by split-tongue taste stimulation in humans. Physiol. Behav. 35 , 779–783 (1985).

Mennella, J. A. et al. “A spoonful of sugar helps the medicine go down”: Bitter masking by sucrose among children and adults. Chem. Senses 40 , 17–25 (2015).

Wietstock, P., Kunz, T., Perreira, F. & Methner, F.-J. Metal chelation behavior of hop acids in buffered model systems. BrewingScience 69 , 56–63 (2016).

Sancho, D., Blanco, C. A., Caballero, I. & Pascual, A. Free iron in pale, dark and alcohol-free commercial lager beers. J. Sci. Food Agric. 91 , 1142–1147 (2011).

Rodrigues, H. & Parr, W. V. Contribution of cross-cultural studies to understanding wine appreciation: A review. Food Res. Int. 115 , 251–258 (2019).

Korneva, E. & Blockeel, H. Towards better evaluation of multi-target regression models. in ECML PKDD 2020 Workshops (eds. Koprinska, I. et al.) 353–362 (Springer International Publishing, Cham, 2020). https://doi.org/10.1007/978-3-030-65965-3_23 .

Ares, G. Mathematical and Statistical Methods in Food Science and Technology. (Wiley, 2013).

Grinsztajn, L., Oyallon, E. & Varoquaux, G. Why do tree-based models still outperform deep learning on tabular data? Preprint at http://arxiv.org/abs/2207.08815 (2022).

Gries, S. T. Statistics for Linguistics with R: A Practical Introduction. in Statistics for Linguistics with R (De Gruyter Mouton, 2021). https://doi.org/10.1515/9783110718256 .

Lundberg, S. M. et al. From local explanations to global understanding with explainable AI for trees. Nat. Mach. Intell. 2 , 56–67 (2020).

Ickes, C. M. & Cadwallader, K. R. Effects of ethanol on flavor perception in alcoholic beverages. Chemosens. Percept. 10 , 119–134 (2017).

Kato, M. et al. Influence of high molecular weight polypeptides on the mouthfeel of commercial beer. J. Inst. Brew. 127 , 27–40 (2021).

Wauters, R. et al. Novel Saccharomyces cerevisiae variants slow down the accumulation of staling aldehydes and improve beer shelf-life. Food Chem. 398 , 1–11 (2023).

Li, H., Jia, S. & Zhang, W. Rapid determination of low-level sulfur compounds in beer by headspace gas chromatography with a pulsed flame photometric detector. J. Am. Soc. Brew. Chem. 66 , 188–191 (2008).

Dercksen, A., Laurens, J., Torline, P., Axcell, B. C. & Rohwer, E. Quantitative analysis of volatile sulfur compounds in beer using a membrane extraction interface. J. Am. Soc. Brew. Chem. 54 , 228–233 (1996).

Molnar, C. Interpretable Machine Learning: A Guide for Making Black-Box Models Interpretable. (2020).

Zhao, Q. & Hastie, T. Causal interpretations of black-box models. J. Bus. Econ. Stat. Publ. Am. Stat. Assoc. 39 , 272–281 (2019).

Hastie, T., Tibshirani, R. & Friedman, J. The Elements of Statistical Learning. (Springer, 2019).

Labrado, D. et al. Identification by NMR of key compounds present in beer distillates and residual phases after dealcoholization by vacuum distillation. J. Sci. Food Agric. 100 , 3971–3978 (2020).

Lusk, L. T., Kay, S. B., Porubcan, A. & Ryder, D. S. Key olfactory cues for beer oxidation. J. Am. Soc. Brew. Chem. 70 , 257–261 (2012).

Gonzalez Viejo, C., Torrico, D. D., Dunshea, F. R. & Fuentes, S. Development of artificial neural network models to assess beer acceptability based on sensory properties using a robotic pourer: A comparative model approach to achieve an artificial intelligence system. Beverages 5 , 33 (2019).

Gonzalez Viejo, C., Fuentes, S., Torrico, D. D., Godbole, A. & Dunshea, F. R. Chemical characterization of aromas in beer and their effect on consumers liking. Food Chem. 293 , 479–485 (2019).

Gilbert, J. L. et al. Identifying breeding priorities for blueberry flavor using biochemical, sensory, and genotype by environment analyses. PLOS ONE 10 , 1–21 (2015).

Goulet, C. et al. Role of an esterase in flavor volatile variation within the tomato clade. Proc. Natl. Acad. Sci. 109 , 19009–19014 (2012).

Borisov, V. et al. Deep Neural Networks and Tabular Data: A Survey. IEEE Trans. Neural Netw. Learn. Syst. 1–21 https://doi.org/10.1109/TNNLS.2022.3229161 (2022).

Statista. Statista Consumer Market Outlook: Beer - Worldwide.

Seitz, H. K. & Stickel, F. Molecular mechanisms of alcohol-mediated carcinogenesis. Nat. Rev. Cancer 7 , 599–612 (2007).

Voordeckers, K. et al. Ethanol exposure increases mutation rate through error-prone polymerases. Nat. Commun. 11 , 3664 (2020).

Goelen, T. et al. Bacterial phylogeny predicts volatile organic compound composition and olfactory response of an aphid parasitoid. Oikos 129 , 1415–1428 (2020).

Reher, T. et al. Evaluation of hop (Humulus lupulus) as a repellent for the management of Drosophila suzukii. Crop Prot. 124 , 104839 (2019).

Stein, S. E. An integrated method for spectrum extraction and compound identification from gas chromatography/mass spectrometry data. J. Am. Soc. Mass Spectrom. 10 , 770–781 (1999).

American Society of Brewing Chemists. Sensory Analysis Methods. (American Society of Brewing Chemists, St. Paul, MN, U.S.A., 1992).

McAuley, J., Leskovec, J. & Jurafsky, D. Learning Attitudes and Attributes from Multi-Aspect Reviews. Preprint at https://doi.org/10.48550/arXiv.1210.3926 (2012).

Meilgaard, M. C., Civille, G. V. & Carr, B. T. Sensory Evaluation Techniques. (CRC Press, Boca Raton). https://doi.org/10.1201/b16452 (2014).

Schreurs, M. et al. Data from: Predicting and improving complex beer flavor through machine learning. Zenodo https://doi.org/10.5281/zenodo.10653704 (2024).

Acknowledgements

We thank all lab members for their discussions and thank all tasting panel members for their contributions. Special thanks go out to Dr. Karin Voordeckers for her tremendous help in proofreading and improving the manuscript. M.S. was supported by a Baillet-Latour fellowship, L.C. acknowledges financial support from KU Leuven (C16/17/006), F.A.T. was supported by a PhD fellowship from FWO (1S08821N). Research in the lab of K.J.V. is supported by KU Leuven, FWO, VIB, VLAIO and the Brewing Science Serves Health Fund. Research in the lab of T.W. is supported by FWO (G.0A51.15) and KU Leuven (C16/17/006).

Author information

These authors contributed equally: Michiel Schreurs, Supinya Piampongsant, Miguel Roncoroni.

Authors and Affiliations

VIB—KU Leuven Center for Microbiology, Gaston Geenslaan 1, B-3001, Leuven, Belgium

Michiel Schreurs, Supinya Piampongsant, Miguel Roncoroni, Lloyd Cool, Beatriz Herrera-Malaver, Florian A. Theßeling & Kevin J. Verstrepen

CMPG Laboratory of Genetics and Genomics, KU Leuven, Gaston Geenslaan 1, B-3001, Leuven, Belgium

Leuven Institute for Beer Research (LIBR), Gaston Geenslaan 1, B-3001, Leuven, Belgium

Laboratory of Socioecology and Social Evolution, KU Leuven, Naamsestraat 59, B-3000, Leuven, Belgium

Lloyd Cool, Christophe Vanderaa & Tom Wenseleers

VIB Bioinformatics Core, VIB, Rijvisschestraat 120, B-9052, Ghent, Belgium

Łukasz Kreft & Alexander Botzki

AB InBev SA/NV, Brouwerijplein 1, B-3000, Leuven, Belgium

Philippe Malcorps & Luk Daenen

Contributions

S.P., M.S. and K.J.V. conceived the experiments. S.P., M.S. and K.J.V. designed the experiments. S.P., M.S., M.R., B.H. and F.A.T. performed the experiments. S.P., M.S., L.C., C.V., L.K., A.B., P.M., L.D., T.W. and K.J.V. contributed analysis ideas. S.P., M.S., L.C., C.V., T.W. and K.J.V. analyzed the data. All authors contributed to writing the manuscript.

Corresponding author

Correspondence to Kevin J. Verstrepen .

Ethics declarations

Competing interests.

K.J.V. is affiliated with bar.on. The other authors declare no competing interests.

Peer review

Peer review information.

Nature Communications thanks Florian Bauer, Andrew John Macintosh and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. A peer review file is available.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Supplementary Information, Peer Review File, Description of Additional Supplementary Files, Supplementary Data 1–7, Reporting Summary, Source Data.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Cite this article.

Schreurs, M., Piampongsant, S., Roncoroni, M. et al. Predicting and improving complex beer flavor through machine learning. Nat Commun 15 , 2368 (2024). https://doi.org/10.1038/s41467-024-46346-0

Received : 30 October 2023

Accepted : 21 February 2024

Published : 26 March 2024

DOI : https://doi.org/10.1038/s41467-024-46346-0

a paper for writing


Your path to academic success

Improve your paper with our award-winning Proofreading Services, Plagiarism Checker, Citation Generator, AI Detector & Knowledge Base.

Proofreading & Editing

Get expert help from Scribbr’s academic editors, who will proofread and edit your essay, paper, or dissertation to perfection.

Plagiarism Checker

Detect and resolve unintentional plagiarism with the Scribbr Plagiarism Checker, so you can submit your paper with confidence.

Citation Generator

Generate accurate citations with Scribbr’s free citation generator and save hours of repetitive work.

Happy to help you

You’re not alone. Together with our team and highly qualified editors, we help you answer all your questions about academic writing.

Open 24/7 – 365 days a year. Always available to help you.

Very satisfied students

This is our reason for working. We want to make all students happy, every day.

Very helpful

Very helpful, constructive comments!

Easy, fast and elegant too

A really great experience with Scribbr - I needed to get a second proofreader to finish proofing my PhD thesis due to the illness of my initial editor. I was a bit nervous about sending off only two disconnected sections of the document, but the system allowed me to a) choose what pages to submit (so I actually hadn't needed to prep a special partial document), and b) give some context to the editor. It was all very clear. Neshika did a fantastic job, and even delivered a little early on a rush-job deadline. Her comments were professional but human, with a clear sense of her personality coming through and she explained the principles behind changes very clearly. Her edits not only improved clarity, but were elegantly worded too. Worth every penny. Thank you!

great experience they helped me with my…

great experience they helped me with my paper and lost my cat

I like it way better than…

I like it way better than citationmachine.net. On Citation Machine you can't even create a citation without something popping up; when you click out of it, you lose everything you entered, and another pop-up appears not even a second after you go back to creating the citation.

Trusted. This is my 7th manuscript submission to Scribbr and they helped me a lot in improving the quality of my English. The editor also provides several tips that need to be considered in order to clarify sentences. Happy working with scribbr.

Thanks to the editor

Alexandra edited my sloppy text with great attention. She proposed how to clarify a lot of vague places and gave me valuable notes on the sense and language of my text. Her help significantly improved my text, not only in terms of language but also in its sense, so I thank her very much.

Outstanding job!

The individual who proofread my paper provided great feedback. I am very grateful for the amount of time spent reviewing and critiquing the document. I would love to use him again. Thanks so much.

A Surprisingly Personalized Touch: Beyond Expectations with Doug

I was nervous about utilizing an online service because I assumed the edits would be stiff and not aligned with my tone and writing style. However, the edits were insightful and very much aligned with my style. I would recommend this service to everyone! My editor, Doug, was exceptional AND I received it 3 days early! Thank you!

Very clear instruction on what I'm…

Very clear instruction on what I'm supposed to do in the document.

Thank You! You did a great job!

Fantastic service

Fantastic service. My essay was proofread and the structure edited to perfection. It left me with an essay that flowed well with no duplication. Thanks Brandy

This employee deserves a raise!!!

An essay for a scholarship I am applying for got flagged as 100% AI generated, after I had spent hours perfecting it. I had to use the help option, and the employee that helped me was great. Their name on the website was Willemijn, and they helped me all the way through, and explained that AI detectors, even those used by prestigious universities, are not 100% reliable. I was able to get the number down to 65%. I thank the employee, who I don't think realized how young I am, being that I am only 16. They were very understanding and polite, and they deserve to be rewarded somehow. Thank you, Willemijn, from a stressed out high school girl.

Great experience.

The turnaround on my paper was fast and efficient. I was pleased with the quality of the work and the professionalism.

Once again they did not disappoint! 24 hour timeline, got it done perfectly! Cannot say enough good things about them! I have used scribbr a few times and I wouldn’t hesitate to use them again! Thank you!

The feedback was relevant and clear

The feedback was relevant and very clear. We followed almost every proposal.

Good job like always

Everything you need to write an A-grade paper

Free resources used by 5,000,000 students every month.

Videos

Bite-sized videos that guide you through the writing process. Get the popcorn, sit back, and learn!

Lecture slides

Ready-made slides for teachers and professors that want to kickstart their lectures.

  • Academic writing
  • Citing sources
  • Methodology
  • Research process
  • Dissertation structure
  • Language rules

Accessible how-to guides full of examples that help you write a flawless essay, proposal, or dissertation.

Chrome extension

Cite any page or article with a single click right from your browser.

Time-saving templates that you can download and edit in Word or Google Docs.

Helping you achieve your academic goals

Whether we’re proofreading and editing, checking for plagiarism or AI content, generating citations, or writing useful Knowledge Base articles, our aim is to support students on their journey to become better academic writers.

We believe that every student should have the right tools for academic success: free tools like a paraphrasing tool, grammar checker, summarizer, and an AI Proofreader. We pave the way to your academic degree.

Ask our team

Want to contact us directly? No problem. We are always here for you.

Frequently asked questions

Our team helps students graduate by offering:

  • A world-class citation generator
  • Plagiarism Checker software powered by Turnitin
  • Innovative Citation Checker software
  • Professional proofreading services
  • Over 300 helpful articles about academic writing, citing sources, plagiarism, and more

Scribbr specializes in editing study-related documents. We proofread:

  • PhD dissertations
  • Research proposals
  • Personal statements
  • Admission essays
  • Motivation letters
  • Reflection papers
  • Journal articles
  • Capstone projects

Scribbr’s Plagiarism Checker is powered by elements of Turnitin’s Similarity Checker, namely the plagiarism detection software and the Internet Archive and Premium Scholarly Publications content databases.

The add-on AI detector is powered by Scribbr’s proprietary software.

The Scribbr Citation Generator is developed using the open-source Citation Style Language (CSL) project and Frank Bennett’s citeproc-js. It’s the same technology used by dozens of other popular citation tools, including Mendeley and Zotero.

You can find all the citation styles and locales used in the Scribbr Citation Generator in our publicly accessible repository on GitHub.
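
To make the CSL-plus-citeproc-js stack above more concrete, here is a minimal sketch of how a bibliography entry can be produced from a CSL style file and a CSL-JSON item. It illustrates the general technique only, not Scribbr's actual implementation: the file names (apa.csl, locales-en-US.xml), the sample item, and the use of the npm package "citeproc" are assumptions made for this example.

// Minimal sketch: generating a bibliography with citeproc-js and a CSL style.
// Assumptions: the npm package "citeproc" is installed, and apa.csl plus
// locales-en-US.xml have been downloaded locally from the public CSL repositories.
import * as fs from "fs";

// citeproc-js ships without official TypeScript typings, so load it loosely.
const CSL: any = require("citeproc");

// One source described in CSL-JSON (field names follow the CSL-JSON schema).
// The item itself is hypothetical sample data.
const items: Record<string, any> = {
  smith2020: {
    id: "smith2020",
    type: "article-journal",
    title: "An example article",
    author: [{ family: "Smith", given: "Jane" }],
    "container-title": "Journal of Examples",
    issued: { "date-parts": [[2020]] },
  },
};

// The "sys" object tells the engine how to fetch locales and items on demand.
const sys = {
  retrieveLocale: (lang: string) => fs.readFileSync(`locales-${lang}.xml`, "utf8"),
  retrieveItem: (id: string) => items[id],
};

// Any CSL style file can be plugged in here (APA, MLA, Chicago, ...).
const style = fs.readFileSync("apa.csl", "utf8");

const engine = new CSL.Engine(sys, style);
engine.updateItems(Object.keys(items));

// makeBibliography() returns [metadata, formattedEntries]; entries are HTML strings.
const [, entries] = engine.makeBibliography();
console.log(entries.join("\n"));

Because the style file alone controls the output format, swapping apa.csl for any other style in the CSL repository changes the formatted citations without touching the underlying data, which is why tools such as Zotero, Mendeley, and the Scribbr generator can share the same engine.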

IMAGES

  1. 10 Best Printable Handwriting Paper Template PDF for Free at Printablee

  2. Lined Printable A4 Paper, Letter Writing, Personal Use Only.

  3. 10 Best Free Printable Letter Writing Paper For Kids PDF for Free at

  4. Printable Writing Paper With Border

  5. 10 Best Free Printable Handwriting Paper PDF for Free at Printablee

  6. 10 Best Printable Primary Writing Paper Template PDF for Free at Printablee

VIDEO

  1. Writing a Review Paper: What,Why, How?

  2. Research / Journal Paper Writing in a simple way

  3. Planning And Preparation (ENGLISH FOR RESEARCH PAPER WRITING)

  4. Session-4 Four-Day Workshop on "Paper Writing & Publication"

  5. How to make a vintage style paper

  6. What is Research Paper Writing. #learn #learningvideos #learning

COMMENTS

  1. Overview

    Writing a Paper. The Walden Writing Center staff is dedicated to ensuring your transition to a writing intensive program is a smooth one. In the pages listed on the left you will find all the information you need to master the craft of scholarly writing. If you are new to scholarly writing, it may be helpful to remember that writing is a ...

  2. The Writing Process

    Step 1: Prewriting. Step 2: Planning and outlining. Step 3: Writing a first draft. Step 4: Redrafting and revising. Step 5: Editing and proofreading. Other interesting articles. Frequently asked questions about the writing process.

  3. How To Write A Research Paper (FREE Template

    We've covered a lot of ground here. To recap, the three steps to writing a high-quality research paper are: To choose a research question and review the literature. To plan your paper structure and draft an outline. To take an iterative approach to writing, focusing on critical writing and strong referencing.

  4. How to Write a Research Paper

    A research paper is a piece of academic writing that provides analysis, interpretation, and argument based on in-depth independent research. Research papers are similar to academic essays, but they are usually longer and more detailed assignments, designed to assess not only your writing skills but also your skills in scholarly research ...

  5. PDF Strategies for Essay Writing

    Harvard College Writing Center 2 Tips for Reading an Assignment Prompt When you receive a paper assignment, your first step should be to read the assignment prompt carefully to make sure you understand what you are being asked to do. Sometimes your assignment will be open-ended ("write a paper about anything in the course that interests you").

  6. The Beginner's Guide to Writing an Essay

    Essay writing process. The writing process of preparation, writing, and revisions applies to every essay or paper, but the time and effort spent on each stage depends on the type of essay. For example, if you've been assigned a five-paragraph expository essay for a high school class, you'll probably spend the most time on the writing stage; for a college-level argumentative essay, on the ...

  7. Writing a Paper

    Just get your thoughts on paper. Draft an introduction that grabs your reader's attention, states your topic, and explains the point of your paper. Write body paragraphs that logically support your thesis statement. Put the information you researched into your own words. Draft a conclusion that reflects on and summarizes the main points of ...

  8. Research Papers

    Style. The prose style of a term paper should be formal, clear, concise, and direct. Don't try to sound "academic" or "scientific." Just present solid research in a straightforward manner. Use the documentation style prescribed in your assignment or the one preferred by the discipline you're writing for.

  9. A step-by-step guide for creating and formatting APA Style student papers

    When the paper has two authors, write the names on the same line and separate them with the word "and" (e.g., Upton J. Wang and Natalia Dominguez). When the paper has three or more authors, separate the names with commas and include "and" before the final author's name (e.g., Malia Mohamed, Jaylen T. Brown, and Nia L. Ball).

  10. 6 Ways to Write a Paper

    Pre-Writing. 1. Choose a topic and research it. Typically, your teacher or instructor provides a list of topics to choose from for an essay. Read the assignment rubric, make sure you understand it, and pick a topic that interests you and that you think you can write a strong paper about. [1]

  11. PDF The Structure of an Academic Paper

    tutorial. That said, writing conventions vary widely across countries, cultures, and even disciplines. For example, although the hourglass model introduces the most important point right from the beginning as a guide to the rest of the paper, some traditions build the argument gradually and deliver the main idea as a punchline.

  12. Welcome to the Purdue Online Writing Lab

    Mission. The Purdue On-Campus Writing Lab and Purdue Online Writing Lab assist clients in their development as writers—no matter what their skill level—with on-campus consultations, online participation, and community engagement. The Purdue Writing Lab serves the Purdue, West Lafayette, campus and coordinates with local literacy initiatives.

  13. APA Sample Paper

    Note: This page reflects the latest version of the APA Publication Manual (i.e., APA 7), which was released in October 2019. The equivalent resource for the older APA 6 style can be found here. Media Files: APA Sample Student Paper, APA Sample Professional Paper. This resource is enhanced by Acrobat PDF files. Download the free Acrobat Reader.

  14. Sample papers

    These sample papers demonstrate APA Style formatting standards for different student paper types. Students may write the same types of papers as professional authors (e.g., quantitative studies, literature reviews) or other types of papers for course assignments (e.g., reaction or response papers, discussion posts), dissertations, and theses.

  15. Paper format

    To format a paper in APA Style, writers can typically use the default settings and automatic formatting tools of their word-processing program or make only minor adjustments. The guidelines for paper format apply to both student assignments and manuscripts being submitted for publication to a journal. If you are using APA Style to create ...

  16. PDF ACADEMIC WRITING

    - The Writing Process: These features show all the steps taken to write a paper, allowing you to follow it from initial idea to published article. - Into the Essay: Excerpts from actual papers show the ideas from the chapters in action because you learn to write best by getting examples rather than instructions. Much of my approach to ...

  17. What Is Academic Writing?

    Academic writing is a formal style of writing used in universities and scholarly publications. You'll encounter it in journal articles and books on academic topics, and you'll be expected to write your essays, research papers, and dissertation in academic style. Academic writing follows the same writing process as other types of texts, but ...

  18. 8 Google Employees Invented Modern AI. Here's the Inside Story

    The paper was accepted for one of the evening poster sessions. By December, the paper was generating a buzz. Their four-hour session on December 6 was jammed with scientists wanting to know more.

  19. Best Essay Writing Services: Top 5 Websites for Custom Papers in the U.S

    The refund policy is a bit strict, but hopefully, you won't need to use it. ExtraEssay: Best for Urgent Papers. ExtraEssay is probably the fastest essay writing service around. They offer a one ...

  20. What We Gained (and Lost) When Our Daughter Unplugged for a School Year

    My 13-year-old has left her phone behind for ...

  21. How to Send a Letter or Postcard

    Postage for letters mostly depends on weight and size/shape. You can weigh your letter with a kitchen scale, postal scale, at a self-service kiosk, or at the Post Office™ counter. TIP: As a rule of thumb, you can send 1 oz (4 sheets of printer paper and a business-sized envelope) for 1 First-Class Mail® Forever® stamp (currently $0.68). The postage for a large envelope (or flat) starts ...

  22. Predicting and improving complex beer flavor through machine ...

    The beer was poured through two filter papers (Macherey-Nagel, 500713032 MN 713 ¼) to remove carbon dioxide and prevent spontaneous foaming. ... All authors contributed to writing the manuscript ...

  23. How to Structure an Essay

    The basic structure of an essay always consists of an introduction, a body, and a conclusion. But for many students, the most difficult part of structuring an essay is deciding how to organize information within the body. This article provides useful templates and tips to help you outline your essay, make decisions about your structure, and ...

  24. Best Essay Writing Services: Legit Paper Writing Websites in 2024

    CollegeEssay.org - Comprehensive Writing Assistance. CollegeEssay.org stands out as an exhaustive solution for diverse essay writing needs. With a team of expert writers adept at handling ...

  25. Ministry of Education National Grade Six Mock Assessment # 2

    1. This paper contains six questions. You are required to answer Question 1 and three others. 2. You have 60 minutes of working time and 10 minutes of reading time. Working may begin during reading time. Each question is worth 5 marks. Note: You must answer only four questions. Be sure to answer the four questions completely.

  26. Best Essay Writing Services Worth Considering in 2024

    It's like getting a bonus that adds more value, making MyPerfectWords.com the top choice for students who want both quality and budget-friendly help. High School: Starting at $11.00 / page ...

  27. Scribbr

    Everything you need to write an A-grade paper. Free resources used by 5,000,000 students every month. Videos. Bite-sized videos that guide you through the writing process. Get the popcorn, sit back, and learn! Lecture slides. Ready-made slides for teachers and professors that want to kickstart their lectures.

  28. Building Meta's GenAI Infrastructure

    The future of Meta's AI infrastructure. These two AI training cluster designs are a part of our larger roadmap for the future of AI. By the end of 2024, we're aiming to continue to grow our infrastructure build-out that will include 350,000 NVIDIA H100s as part of a portfolio that will feature compute power equivalent to nearly 600,000 H100s.