2nd PromptEng Workshop at the ACM WebConf'25, April 28th or 29th, 2025
The recent achievements and broad availability of Large Language Models have paved the road to a new range of applications and use-cases. Pre-trained language models are now deployed at scale in many fields from which they were previously absent. More specifically, the progress made by causal generative models has opened the door to using them through textual instructions, also known as prompts. Unfortunately, the performance of these prompts depends heavily on their exact phrasing, and practitioners therefore need to adopt fail-and-retry strategies.
In a nutshell, PromptEng provides the research community with a forum to discuss, exchange and design advanced prompting techniques for LLM applications.
This second international workshop on prompt engineering aims to gather practitioners (from both academia and industry) to exchange good practices, optimizations, results, and novel paradigms for designing efficient prompts that make use of LLMs.
Recent Large Language Models (LLMs) are undoubtedly becoming more and more capable across many tasks. Several sub-fields of the Semantic Web, such as knowledge graph construction, knowledge verbalization, and Web page summarization, have benefited considerably from this prompting mechanism. The ability to query and interact with these models through prompts is crucial for generating high-quality output in the desired format. While contributions towards prompt engineering already exist, several difficulties and challenges remain in understanding how LLMs respond to different prompts. Typically, the way instructions are conveyed in a prompt can lead to either distinct or similar outputs from the models.
Moreover, for some tasks, certain instructions are followed faithfully while others are simply ignored. So far, LLM practitioners have mostly been working on their own, developing and testing bespoke techniques to achieve their goals and restarting the prompt-design process for each new model they use. Such an approach often leads them to tackle problems that other researchers have already explored.
This workshop aims to investigate and analyze these behaviors through experimental analysis and probing of LLMs, in order to gain insights into the models' sensitivity to different prompts. By uncovering significant findings, the community can greatly benefit from using LLMs more effectively while also preventing the generation of harmful content. Ultimately, this workshop endeavors to compile and index successful and unsuccessful prompts with respect to both tasks and models.
Topics of interest include, but are not limited to, themes related to prompt engineering techniques:
We envision five types of submissions covering the entire spectrum of workshop topics:
To ease the reviewing process, authors may indicate the track they are submitting to directly in their titles, for instance: "Article Title [Industry]".
Submissions must be in double-column format and must adhere to the ACM template and format (also available in Overleaf). The recommended setting for LaTeX is: \documentclass[sigconf, anonymous, review]{acmart}.
The PDF files must have all non-standard fonts embedded. Workshop submissions must be self-contained and written in English. Note: the review process is single-blind, so authors do not need to anonymize their articles.
All papers should be submitted to https://easychair.org/conferences/?conf=prompteng2025.
PromptEng 2025 is co-located with the ACM WebConf 2025 in Sydney, Australia. More information about the venue is available on the conference website.