**Papers MUST:**
1. **Focus primarily on the engineering, design, or optimization of prompts *specifically* for Large Language Models (LLMs).** (This clarifies that the focus is prompt engineering *for LLMs*, not just any AI system.)
2. **Investigate, analyze, or propose methods for improving LLM performance *through the manipulation of textual input prompts*.** (This emphasizes the "how" of prompt engineering and excludes papers that might just mention prompts in passing.)
3. **Provide concrete examples of prompts and demonstrate their impact on LLM output, replicable with publicly available LLMs.** (This maintains the practical, replicable aspect while emphasizing the prompt-output relationship.)
**Papers MUST NOT:**
1. **Focus primarily on the development of new LLM architectures or training methods.** (This explicitly excludes papers about building or training LLMs, focusing the criteria on using them.)
2. **Be primarily concerned with applications of generative AI *other than text generation driven by LLMs*, such as image, video, or audio generation.** (This clearly differentiates LLMs from other generative models like text-to-image or text-to-video.)
3. **Be primarily concerned with medical, automotive (self-driving), or ethical subjects.** (This exclusion remains, but is lower priority given the more specific focus.)
**Additional Instructions:**
* **The core subject of the paper must be prompt engineering for text-based interactions with LLMs. Papers that mention prompts but do not make them the central focus should be rejected.** (This emphasizes the primary importance of prompt engineering.)
* **Reject papers that focus on using LLMs as components within larger systems where prompt engineering is not the primary concern (e.g., using an LLM as part of a no-code platform or within a multi-agent system).** (This addresses the issue encountered with the example abstract.)
* **Favor papers that explore novel prompt engineering techniques, provide comparative analyses of different prompting strategies, or offer frameworks for systematic prompt development.** (This provides guidance on what constitutes a *strong* accept.)
* **Analyze each paper's title and abstract carefully to determine which criteria are met. A paper must meet all of the "MUST" criteria to be considered for acceptance; a sketch of this decision rule follows the list.** (This further clarifies the requirements for acceptance, since the earlier "two or three" threshold no longer applies.)
* **Err on the side of caution. When in doubt, reject.** (This rule makes the selection more exclusive, suitable for cases where there might be too many submissions to process effectively.)
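The bullets above reduce to a simple conjunction of checks. Below is a minimal sketch of that decision rule in Python, assuming each criterion has already been judged from the title and abstract; the `PaperAssessment` fields and the `screen_paper` helper are illustrative names invented here, not part of any existing tool.

```python
from dataclasses import dataclass


@dataclass
class PaperAssessment:
    # "MUST" criteria, judged from the title and abstract.
    focuses_on_prompt_engineering_for_llms: bool
    improves_llms_via_textual_prompts: bool
    gives_replicable_prompt_examples: bool
    # "MUST NOT" criteria.
    proposes_new_architecture_or_training: bool
    targets_non_text_generation: bool
    primarily_medical_automotive_or_ethical: bool
    # Set when the title and abstract leave the reviewer unsure.
    uncertain: bool = False


def screen_paper(paper: PaperAssessment) -> str:
    """Accept only when every MUST holds, no MUST NOT holds, and
    there is no residual doubt; otherwise reject."""
    musts_met = (
        paper.focuses_on_prompt_engineering_for_llms
        and paper.improves_llms_via_textual_prompts
        and paper.gives_replicable_prompt_examples
    )
    must_nots_triggered = (
        paper.proposes_new_architecture_or_training
        or paper.targets_non_text_generation
        or paper.primarily_medical_automotive_or_ethical
    )
    if musts_met and not must_nots_triggered and not paper.uncertain:
        return "accept"
    return "reject"  # err on the side of caution


# Example: the No-Code platform paper from the rejection examples below
# fails the first MUST criterion, so it is rejected.
platform_paper = PaperAssessment(
    focuses_on_prompt_engineering_for_llms=False,
    improves_llms_via_textual_prompts=False,
    gives_replicable_prompt_examples=False,
    proposes_new_architecture_or_training=False,
    targets_non_text_generation=False,
    primarily_medical_automotive_or_ethical=False,
)
print(screen_paper(platform_paper))  # -> reject
```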
**Example of Clear Rejection:**
* A paper about a new No-Code platform that utilizes LLMs as one of its components would be rejected, even if it mentions prompts, because the primary focus is the platform, not prompt engineering.
* A paper that details a new method for fine-tuning LLMs would be rejected, because the criteria focus on prompt engineering rather than new training methods.
**Example of Clear Acceptance:**
* A paper that presents a new technique for automatically generating prompts that elicit specific types of responses from an LLM, providing detailed examples and comparisons, would be accepted.
* A paper comparing the effectiveness of various prompt structures (e.g., zero-shot, few-shot, chain-of-thought; illustrated in the sketch below) for a particular task, offering insights into optimal prompt design, would be accepted.
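For reviewers less familiar with the prompt structures named in the last example, here is a minimal illustration of the three styles applied to a generic arithmetic question; the question and templates are common patterns invented for illustration, not drawn from any particular paper.

```python
QUESTION = "A shop sells pens at 3 for $2. How much do 12 pens cost?"

# Zero-shot: the task is stated directly, with no worked examples.
zero_shot = f"Q: {QUESTION}\nA:"

# Few-shot: a handful of solved examples precede the target question.
few_shot = (
    "Q: Apples cost $1 each. How much do 5 apples cost?\nA: $5\n\n"
    "Q: A bus ticket costs $2.50. How much do 4 tickets cost?\nA: $10\n\n"
    f"Q: {QUESTION}\nA:"
)

# Chain-of-thought: the prompt elicits intermediate reasoning steps
# before the final answer.
chain_of_thought = f"Q: {QUESTION}\nA: Let's think step by step."
```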