PromptEng'25 - Workshop on Prompt Engineering for Pre-Trained Language Models

2nd PromptEng Workshop at the ACM WebConf'25, April 28th or 29th, 2025


Fast-Track (Re-)Submission from the WebConf main track

Authors of papers rejected from the WebConf main track are encouraged to resubmit their work to PromptEng if it falls within the scope of the workshop.
Deadline: Authors must (re-)submit between January 20th and 26th, 11:59 AOE, on EasyChair; final decisions will be communicated by January 27th, 11:59 AOE.
Instructions to authors:
  • Add an Appendix below the References containing the reviews from the main-track review process. Note that editing the review comments is not allowed.
  • In the Appendix, include a section called “Improvements” briefly describing the revisions (as time is limited, only light revisions, if any, are expected). Note that heavy changes are not allowed.

The PromptEng organising committee will not share the provided reviews with any third parties. If we identify a conflict of interest with a submission, we will immediately inform the authors and ask them to submit elsewhere. By default, accepted papers will be published in the ACM Companion proceedings; authors who wish to opt their work out of the Companion proceedings should inform us clearly upon acceptance.

The recent achievements and availability of Large Language Models have paved the way for a new range of applications and use-cases. Pre-trained language models are now deployed at scale in many fields from which they were previously absent. More specifically, the progress made by causal generative models has opened the door to steering them through textual instructions, i.e. prompts. Unfortunately, the performance of these prompts is highly dependent on the exact phrasing used, forcing practitioners to adopt trial-and-error strategies.

In a nutshell, PromptEng provides the research community with a forum to discuss, exchange and design advanced prompting techniques for LLM applications.

This second international workshop on prompt engineering aims to gather practitioners (from both academia and industry) to exchange good practices, optimizations, results, and novel paradigms for designing efficient prompts that make the best use of LLMs.

Undoubtedly, recent Large Language Models (LLMs) are becoming increasingly capable across many tasks. Several Semantic Web sub-fields, such as knowledge graph construction, knowledge verbalization, and Web page summarization, have benefited considerably from this prompting mechanism. The ability to query and interact with LLMs using prompts is crucial for generating high-quality output in the desired format. While contributions have already been made towards prompt engineering, several difficulties and challenges remain in understanding how these LLMs respond to different prompts. Typically, the way instructions are conveyed in prompts can lead to either distinct or similar outputs from the models.

Moreover, for some tasks, certain instructions are well respected while others are simply ignored. So far, LLM practitioners have mainly been working on their own, developing and testing bespoke techniques to achieve their goals and restarting the prompt-design task for each new model they use. Such an approach often leads to tackling problems that other researchers have already explored.

This workshop aims to investigate and analyze these behaviors, through experimental analysis and probing of LLMs, in order to gain insights into the models' sensitivity to different prompts. By uncovering significant findings, the community can benefit greatly in utilizing LLMs more effectively while also preventing the generation of harmful content. Ultimately, this workshop endeavors to compile and index successful and unsuccessful prompts with respect to both tasks and models.


Topics of interest include, but are not limited to, themes related to prompt engineering techniques:

  • Prompts & Chain-of-Thought Prompts Design
  • Theoretical and Experimental Analysis of Prompting
  • Prompts Transferability
  • Specific prompt techniques for Web crawling
  • Ontology generation combining LLM and Web data
  • Semantic and Syntactic comparison of prompt performances
  • Structured Prediction with Prompts
  • Prompt Retrieval and Generation
  • Visualization with Prompt Techniques

We envision five types of submissions covering the full spectrum of workshop topics:

  1. Research Papers (max 10 pages), presenting novel scientific research addressing topics of the workshop.
  2. Position & Demo Papers (max 5 pages), describing significant work in progress, late-breaking results, or ideas of the domain, as well as functional systems relevant to the community.
  3. Industry & Use Case Presentations (max 5 pages), in which industry experts can present and discuss practical solutions, use case prototypes, best practices, etc. at any stage of implementation.
  4. Expression of Interest (max 2 pages), presenting a research topic, a work in progress, practical applications or needs, etc.
  5. Technical Prompting Technique (max 2 pages), practically describing a prompt, together with a minimal working example and the use case motivating it.

In order to ease the reviewing process, authors may add the track they are submitting to directly in their titles, for instance: "Article Title [Industry]".


Submissions must be in double-column format and must adhere to the ACM template and format (also available on Overleaf). The recommended setting for LaTeX is: \documentclass[sigconf, anonymous, review]{acmart}. The PDF files must have all non-standard fonts embedded. Workshop submissions must be self-contained and written in English. Note: the review process is single-blind, so authors are not required to anonymise their submissions.
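As an illustration, a minimal acmart skeleton following the recommended setting above might look like the following (the title, author, and affiliation are placeholders; the track tag in the title is the optional convention described below):

```latex
% Minimal skeleton using the recommended ACM document class options.
% Since the review process is single-blind, the "anonymous" option
% may be left out if you prefer author names to remain visible.
\documentclass[sigconf, anonymous, review]{acmart}

\begin{document}

\title{Article Title [Industry]} % optional track tag in the title

\author{Jane Doe} % placeholder author
\affiliation{%
  \institution{Example University}
  \city{Sydney}
  \country{Australia}}
\email{jane.doe@example.org}

\begin{abstract}
  Abstract text.
\end{abstract}

\maketitle

\section{Introduction}
Body text.

\bibliographystyle{ACM-Reference-Format}
% \bibliography{references}

\end{document}
```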

All papers should be submitted to https://easychair.org/conferences/?conf=prompteng2025.


Organisers

  • Damien Graux (Huawei Ltd., UK) is a principal research scientist at the Huawei Research Center. He has been contributing to research efforts in Semantic Web technologies, focusing on query evaluation and on designing complex pipelines for heterogeneous Big Data. Previously, he held research positions at Inria (France), Trinity College Dublin (Ireland), and Fraunhofer IAIS (Germany).
  • Sebastien Montella (Huawei Ltd., UK) is a research scientist at the Huawei Edinburgh Research Center. During his Ph.D., he specialized in Natural Language Generation and Knowledge Graph Embeddings. Additionally, he has a keen interest in statistical learning, geometric deep learning, natural language processing, and computer vision. In the past, Sebastien has co-organized the 18th Workshop on Spoken Dialogue Systems for PhDs, PostDocs & New Researchers (YRRSDS) in Edinburgh, Scotland (2022).
  • Hajira Jabeen (GESIS, Germany) leads the 'Big Data Analytics' research team within the Knowledge Technologies for Social Sciences (KTS) department at GESIS. Her team is dedicated to conducting research involving natural language processing, knowledge graphs, distributed analytics, and big data techniques. She has a background in both research and teaching, with previous affiliations at the University of Bonn, the University of Cologne, and ITU Copenhagen. She has previously organized several workshops and conferences.
  • Claire Gardent (CNRS/LORIA, France) is a senior research scientist at the French National Center for Scientific Research (CNRS), based at the LORIA Computer Science research unit in Nancy, France. She works in the field of Natural Language Processing with a particular interest for Natural Language Generation. In 2017, she launched the WebNLG challenge, a shared task where the goal is to generate text from Knowledge Base fragments. She has proposed neural models for simplification and summarisation; for the generation of long form documents such as multi-document summaries and Wikipedia articles; for multilingual generation from Abstract Meaning Representations and for response generation in dialog. She currently heads the AI XNLG Chair on multi-lingual, multi-source NLG and the CNRS LIFT Research Network on Computational, Formal and Field Linguistics. In 2022, she was awarded the CNRS Silver Medal and was selected as ACL (Association of Computational Linguistics) Fellow.
  • Jeff Z. Pan (University of Edinburgh, UK) is a chair of the Knowledge Graph Group at the Alan Turing Institute and a member of the School of Informatics at the University of Edinburgh. He received his Ph.D. in Computer Science from The University of Manchester in 2004. He joined the faculty in the Department of Computing Science at The University of Aberdeen in 2005, where he later became the Leader of the Knowledge Technology group and the Director of the Joint Research Lab on Knowledge Engineering and Information Security. He joined the School of Informatics in 2020 and is a member of ILCC.

Program Committee

TBA.

Important Dates

All deadlines are 23:59 AOE.
  • Submission (EasyChair): January 15th, 2025
  • Notification: January 24th, 2025
  • Camera-ready: February 2nd, 2025
  • Presentation: April 28th or 29th, 2025

Event Location

PromptEng 2025 is co-located with the ACM WebConf 2025.

Sydney, Australia

More information about the venue.