PromptEng'26 - Workshop on Prompt Engineering for Pre-Trained Language Models

3rd PromptEng Workshop at the ACM WebConf'26, April 13th-14th, 2026


The recent achievements and availability of Large Language Models have paved the road to a new range of applications and use-cases. Pre-trained language models are now deployed at scale in many fields from which they were previously absent. More specifically, the progress made by causal generative models has opened the door to using them through textual instructions, i.e. prompts. Unfortunately, the performance of these prompts is highly dependent on the exact phrasing used, and practitioners therefore need to adopt fail-and-retry strategies.

In a nutshell, PromptEng provides the research community with a forum to discuss, exchange and design advanced prompting techniques for LLM applications.

This third international workshop on prompt engineering aims to gather practitioners (from both academia and industry) to exchange good practices, optimizations, results and novel paradigms for designing efficient prompts that make the most of LLMs.

Undoubtedly, recent Large Language Models (LLMs) are becoming more and more capable across many tasks. Several Semantic Web sub-fields, such as knowledge graph construction, knowledge verbalization and Web page summarization, have benefited considerably from this prompting mechanism. The ability to query and interact with these models using prompts is crucial to generating high-quality output in the desired format. While contributions have already been made towards prompt engineering, several difficulties and challenges remain before we can gain a better understanding of how LLMs respond to different prompts. Typically, the way instructions are conveyed in prompts can lead to either distinct or similar outputs from the models.

Moreover, some instructions are better respected than others, which are simply ignored for some tasks. So far, LLM practitioners have mainly been working on their own, developing and testing bespoke techniques to achieve their goals, and restarting the prompt-design task for each new model they use. Such an approach often leads to tackling problems that have already been explored by other researchers.

This workshop aims to investigate and analyze these behaviors, through experimental analysis and probing of LLMs, in order to gain insights into the models' sensitivity to different prompts. By uncovering significant findings, the community can learn to use LLMs more effectively while also preventing the generation of harmful content. Ultimately, this workshop endeavors to compile and index successful and unsuccessful prompts with respect to both tasks and models.


Topics of interest include, but are not limited to, themes related to prompt-engineering techniques:

  • Prompt & Chain-of-Thought Prompt Design
  • Theoretical and Experimental Analysis of Prompting
  • Prompt Transferability
  • Specific Prompt Techniques for Web Crawling
  • Ontology Generation Combining LLMs and Web Data
  • Semantic and Syntactic Comparison of Prompt Performance
  • Structured Prediction with Prompts
  • Prompt Retrieval and Generation
  • Visualization with Prompt Techniques
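As a concrete illustration of the kind of techniques in scope, the sketch below assembles a minimal few-shot chain-of-thought prompt. The few-shot example, the reasoning cue and the function name are illustrative placeholders, not part of any workshop submission format:

```python
# Minimal chain-of-thought (CoT) prompting sketch: the final prompt prepends
# a worked example and an explicit reasoning cue to the actual question.

FEW_SHOT_EXAMPLE = (
    "Q: A shop sells pens at 2 euros each. How much do 3 pens cost?\n"
    "A: One pen costs 2 euros, so 3 pens cost 3 * 2 = 6 euros. The answer is 6.\n\n"
)

REASONING_CUE = "Let's think step by step."

def build_cot_prompt(question: str) -> str:
    """Assemble a few-shot CoT prompt string for a causal generative model."""
    return f"{FEW_SHOT_EXAMPLE}Q: {question}\nA: {REASONING_CUE}"

prompt = build_cot_prompt("How much do 5 pens cost?")
print(prompt)
```

The resulting string would then be passed verbatim to whichever model the practitioner is evaluating; comparing small variations of such templates is precisely the kind of sensitivity analysis the workshop targets.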

We envision five types of submissions, covering the entire spectrum of workshop topics:

  1. Research Papers (max 10 pages), presenting novel scientific research addressing topics of the workshop.
  2. Position & Demo Papers (max 5 pages), for papers describing significant work in progress, late-breaking results or ideas in the domain, as well as functional systems relevant to the community.
  3. Industry & Use Case Presentations (max 5 pages), in which industry experts can present and discuss practical solutions, use case prototypes, best practices, etc. at any stage of implementation.
  4. Expression of Interest (max 2 pages), presenting a research topic, a work in progress, practical applications or needs, etc.
  5. Technical Prompting Technique (max 2 pages), practically describing a prompt, together with a minimal working example and an associated use-case motivating it.

In order to ease the reviewing process, authors may indicate the track they are submitting to directly in their title, for instance: "Article Title [Industry]".


Submissions must be in double-column format and must adhere to the ACM template and format (also available in Overleaf). The recommended setting for LaTeX is: \documentclass[sigconf, anonymous, review]{acmart}. The PDF files must have all non-standard fonts embedded. Workshop submissions must be self-contained and in English. Note: the review process is single-blind, so authors do not need to anonymize their submissions.
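For reference, a minimal document skeleton using the recommended setting might look as follows (title, author and affiliation are placeholders):

```latex
% Minimal acmart skeleton with the recommended submission options.
\documentclass[sigconf, anonymous, review]{acmart}

\begin{document}

\title{Article Title [Industry]} % track optionally indicated in the title
\author{Jane Doe}
\affiliation{%
  \institution{Example University}
  \country{United Kingdom}}

\maketitle

Body text goes here.

\end{document}
```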

All papers should be submitted to https://submission.platform.example/.

  • Submission: December 18th, 2025
  • Notification: January 13th, 2026
  • Camera-ready: February 2nd, 2026
  • Presentation: April 13th-14th, 2026
Note: All deadlines are 23:59 AOE.


Organisers

  • Damien Graux (EcoVadis, UK) leads a team of research scientists at EcoVadis that is specialised in AI/ML. He has been contributing to research efforts in Knowledge Computing technologies, focusing inter alia on the Semantic Web, designing complex pipelines for heterogeneous Big Data, and LLM-based knowledge management. Prior to this, he held research positions at Huawei R&D (UK), Inria (France), Trinity College Dublin (Ireland) and Fraunhofer IAIS (Germany). He has been involved in the organisation of many international workshops at major conferences, such as the LASCAR (co-located with ESWC) and MEPDaW (co-located with ISWC) series, or recently NORA at NeurIPS.
  • Sebastien Montella (Huawei Ltd., UK) is a research scientist at the Huawei Edinburgh Research Center. During his Ph.D., he specialized in Natural Language Generation and Knowledge Graph Embeddings. Additionally, he has a keen interest in statistical learning, geometric deep learning, natural language processing, and computer vision. In the past, Sebastien has co-organized the 18th Workshop on Spoken Dialogue Systems for PhDs, PostDocs & New Researchers (YRRSDS) in Edinburgh, Scotland (2022).
  • Hajira Jabeen (UniKlinik Cologne, Germany) leads the 'AI in Research Data Management' team at the Institute for Biomedical Informatics. Her team leverages artificial intelligence and large language models (LLMs) to enhance research data management practices, particularly in the biomedical field. They focus on developing scalable, AI-driven tools and workflows that improve data organization, integration, and analysis, driving innovative, data-centric solutions. Hajira has a diverse background in research and teaching, with prior affiliations at the University of Bonn, the University of Cologne, and ITU Copenhagen. She has also organized numerous workshops and conferences in data science and informatics.

Program Committee

TBC.


Event Location

PromptEng 2026 is co-located with the ACM WebConf 2026.

Dubai, United Arab Emirates

More information about the venue is available on the conference website.