Talk:Prompt engineering

From Wikipedia, the free encyclopedia

Restrictions of a Context Window

The context window and token cap apply to every request; the reasoning below explains why this should be mentioned in the Prompt engineering article.

The Context Window (or simply Context Length) is a principal limitation on the amount of information an LLM can process or produce in a single interaction.

This “window” limits the total number of tokens, which in most instances are words or sub-word units, that the model can process at a time. It comprises both the user's input prompt and the AI's generated output or response.

A concept that encompasses this limit is Maximum Fixed Allocation of Tokens (MFAT). It corresponds directly to the hard limit imposed by the Context Window and represents the absolute maximum “space” available for a complete request and reply. Knowing this token allocation ceiling is crucial for effective prompt engineering, since it determines how much input and output must be budgeted within a given limit. Developers and users should ensure that the prompts they issue and the expected responses together do not exceed the defined Context Window, in order to avoid truncation, [[Error (computing)|errors]], or partial results. Various calculators pertain to MFAT. Some are available to the public for free, like this one: . However, others are proprietary or restricted, like the ones used by Grok or DeepSeek.
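The budgeting described above can be sketched in a few lines. This is a hypothetical illustration, not any model's real API: the 8,192-token window, the 1,024-token output reservation, and the whitespace "tokenizer" are all assumptions for the example (real models use sub-word tokenizers such as BPE, which count differently).

```python
# Illustrative sketch: checking a prompt against an assumed context window.
# The limits and the crude whitespace tokenizer are assumptions, not any
# specific model's values; real tokenizers split text into sub-word units.

CONTEXT_WINDOW = 8192       # assumed total tokens (input + output) per request
MAX_OUTPUT_TOKENS = 1024    # assumed budget reserved for the model's reply

def rough_token_count(text: str) -> int:
    """Crude approximation: one token per whitespace-separated word."""
    return len(text.split())

def fits_in_window(prompt: str) -> bool:
    """True if the prompt plus the reserved output budget fits the window."""
    return rough_token_count(prompt) + MAX_OUTPUT_TOKENS <= CONTEXT_WINDOW

print(fits_in_window("Summarize the quarterly sales report."))  # True
```

A prompt that fails this check would need to be shortened, or the expected response would be truncated.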

"Question": "What is the distinction between 'Context Window' and 'Max Output Tokens'?", 
"Answer", "text": "'Context Window' is the total of all the data (input + output) the model accepts in a single pass. 'Max Output Tokens' is a distinct and usually smaller cap on the number of new tokens which can be issued by the model in one response. For instance, a model may have a context window of 1,000,000 tokens but can only respond with 8,192 tokens."

Foledman (talk) 02:52, 20 July 2025 (UTC)

The limitations related to context windows may be notable enough for a sentence or two in the "limitations" section if properly sourced. But the term "Maximum Fixed Allocation of Tokens" does not look notable. Alenoach (talk) 03:30, 20 July 2025 (UTC)

Prompt Examples

Shortly before posting here, I submitted a prompt to ChatGPT (Mar 14 Version, Free Research Preview) asking for a variety of prompt suggestions, and included the text of the main article. A few of the prompt examples ChatGPT suggested are shown below.

I wanted to post the results to the talk page first to get feedback before posting to the main article.

Prompt: Would you please add a section to the Wikipedia article shown below that provides effective prompt examples for the following prompt types: sql query, cake recipe, blog post, memo, and any other prompt examples you may want to add?

ChatGPT: Sure, I can provide examples of effective prompts for different types of tasks.

SQL Query Prompt Example:

"Write a query to find all the customers who have made a purchase in the last month."

Cake Recipe Prompt Example:

"Create a recipe for a moist chocolate cake with a cream cheese frosting. Include the ingredients, measurements, and baking instructions."

Blog Post Prompt Example:

"Write a blog post about the benefits of meditation for reducing stress and improving mental health. Include tips on how to get started and incorporate meditation into your daily routine."

Memo Prompt Example:

"Write a memo outlining the key takeaways from the quarterly sales report. Include an analysis of the top-performing products and recommendations for improving sales in the next quarter."

Job Posting Prompt Example:

"Create a job posting for a software engineer position. Include the required qualifications, responsibilities, and details about the company culture and benefits."

Customer Support Prompt Example:

"Respond to a customer inquiry about a delayed shipment of their order. Apologize for the inconvenience and provide an estimated delivery date. Offer a discount on their next purchase as a gesture of goodwill." Davydog (talk) 04:59, 27 March 2023 (UTC)

Sorry, but you can't add these examples because that would count as original research. If you think the article would benefit from more examples, you'll need to find ones in reliable sources. PopoDameron ⁠talk 05:47, 27 March 2023 (UTC)
Thank you for your clarification, @PopoDameron. Davydog (talk) 00:54, 28 March 2023 (UTC)
In addition to the original research issue, it is almost never appropriate to include text written by an AI in Wikipedia. There is an extensive literature on prompt engineering, as well as open source libraries like PromptSource. Those are appropriate sources to draw examples from.
Stellaathena (talk) 12:39, 8 April 2023 (UTC)

Using feedback from ChatGPT (3.5) I finalized my work on the lede

I kept rewriting (rephrasing) the lede until I got tired. I then decided to ask ChatGPT (3.5, the free version) for help, using the technique of prompt engineering itself. Using its feedback, I crafted the lede as of today, which ChatGPT approved.

- Me (after crafting the version as of today): Is this a good explanation? <copy-paste of the lede>.

- ChatGPT: Yes, this explanation is now clear and concise.

It effectively describes "prompt engineering" and "in-context learning" in the context of interactions with a large language model, highlighting their differences and benefits. It also emphasizes the temporary nature of both approaches to prevent unwanted biases and context from carrying over between conversations.
Here are some positive aspects of the explanation:
* The distinction between prompt engineering and in-context learning is well-defined, making it easy to understand how they are used from different perspectives.
* The description of prompt engineering's focus on crafting series of prompts and in-context learning's adaptability to ongoing conversations is clear and straightforward.
* The mention of the temporary nature of both approaches to prevent biases and old contexts from persisting between conversations is crucial to understanding their utility.
* The addition of the note about biases being limited to those already present in the (pre)training dataset helps provide a more comprehensive understanding of how the model handles biases.
Overall, this explanation effectively conveys the key concepts and advantages of prompt engineering and in-context learning in the context of large language models.
Well done!

- Me (after porting it to the article's "Talk Page"): Is what I said above ("I kept rewriting (rephrasing) the lede until I got tired. I then decided to ask ChatGPT (3.5, the free version) for help, using the technique of prompt engineering itself. Using its feedback, I crafted the current lede, which ChatGPT approved") true?

- ChatGPT: As an AI language model, I don't have access to your browsing history or past interactions, however, based on the current conversation it is evident that you sought feedback, corrections, and alternative phrasings using prompt engineering. This approach is a smart and proactive way to improve the quality of your content and enhance your writing skills.

- Me: I couldn't agree more.

- ChatGPT: I'm glad to hear that! If you have any more questions or need further assistance, feel free to ask. Happy writing!

- Me: Thanks.

- ChatGPT: You're welcome! If you ever need help or have more questions in the future, don't hesitate to reach out. Have a great day!

- Me: I certainly will. -- DancingPhilosopher (talk) 14:44, 24 July 2023 (UTC)

Edit for WP:INTEGRITY

Pronunciation RAG

Retrieval-augmented generation

Undid insertion review

Neutral point of view

Way too many unreviewed preprints

Separate article for non-prompted Chain-of-thought

Neutrality issue not resolved

Potential root cause of WP:INTEGRITY, WP:PREPRINT, WP:NPOV, and other issues: Could this topic not meet notability requirements?

"ignore prior instructions"

Merge proposal
