2nd PromptEng Workshop at the ACM WebConf'25, April 29th, 2025
The recent achievements and availability of Large Language Models (LLMs) have paved the road to a new range of applications and use-cases. Pre-trained language models are now being adopted at scale in many fields from which they were previously absent. More specifically, the progress made by causal generative models has opened the door to using them through textual instructions, a.k.a. prompts. Unfortunately, the performance of these prompts is highly dependent on the exact phrasing used, and practitioners therefore need to adopt fail-retry strategies.
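Such a fail-retry strategy can be sketched in a few lines. The helper below is a hypothetical illustration (the function name, the `call_model` callable, and the validator are all placeholders, not part of any specific API): it cycles through alternative phrasings of the same instruction until one yields an output that passes a simple validity check.

```python
def first_valid_output(prompt_variants, call_model, is_valid, retries=2):
    """Try each prompt phrasing in turn, retrying a few times per phrasing,
    until the model's output passes validation.

    Returns (winning_prompt, output), or (None, None) if every variant fails.
    """
    for prompt in prompt_variants:
        for _ in range(retries):
            output = call_model(prompt)
            if is_valid(output):
                return prompt, output
    return None, None
```

For example, with a validator such as `lambda s: s.startswith("{")`, the loop keeps rephrasing until the model finally returns something JSON-shaped.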
In a nutshell, PromptEng provides the research community with a forum to discuss, exchange and design advanced prompting techniques for LLM applications.
This second international workshop on prompt engineering aims at gathering practitioners (from both academia and industry) to exchange good practices, optimizations, results, and novel paradigms concerning the design of efficient prompts for making use of LLMs.
Undoubtedly, recent Large Language Models (LLMs) are becoming increasingly capable across many tasks. Several sub-fields of the Semantic Web, such as Knowledge Graph construction, knowledge verbalization, and Web page summarization, have benefited considerably from this prompting mechanism. The ability to query and interact with LLMs using prompts is crucial to generate high-quality output in the desired format. While existing contributions have been made towards prompt engineering, several difficulties and challenges remain in gaining a better understanding of how these LLMs respond to different prompts. Typically, the way instructions are conveyed in a prompt can lead to either distinct or similar outputs from the models.
Moreover, some instructions are respected better than others, while some are simply ignored for certain tasks. So far, LLM practitioners have mainly been working on their own, developing and testing bespoke techniques to achieve their goals, and restarting the prompt-design task for each new model they use. Such an approach often leads to tackling problems that have already been explored by other researchers.
This workshop aims to investigate and analyze these behaviors through experimental analysis and probing of LLMs, in order to gain insights into the models' sensitivity to different prompts. By uncovering significant findings, the community can greatly benefit in utilizing LLMs more effectively, while also preventing the generation of harmful content. Ultimately, this workshop endeavors to compile and index successful and unsuccessful prompts with respect to both tasks and models.
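A minimal sketch of such prompt-sensitivity probing, assuming only a generic `call_model` callable (a placeholder for whatever model interface a practitioner uses): send several paraphrases of the same instruction and tally how often the outputs agree.

```python
from collections import Counter

def probe_prompt_sensitivity(paraphrases, call_model):
    """Query the model with paraphrased instructions for the same task.

    Returns the per-paraphrase outputs and a Counter over distinct outputs;
    a concentrated Counter suggests the model is robust to the rephrasings,
    a spread-out one suggests sensitivity to phrasing.
    """
    outputs = {p: call_model(p) for p in paraphrases}
    return outputs, Counter(outputs.values())
```

In a real probe one would compare outputs with a task-specific similarity measure rather than exact string equality, but the counting skeleton stays the same.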
Topics of interest include, but are not limited to, themes related to prompt engineering techniques:
We envision five types of submissions covering the entire workshop topics spectrum:
In order to ease the reviewing process, authors may add the track they are submitting to directly in their titles, for instance: "Article Title [Industry]".
Submissions must be in double-column format and must adhere to the ACM template and format (also available in Overleaf). The recommended setting for LaTeX is: \documentclass[sigconf, anonymous, review]{acmart}. The PDF files must have all non-standard fonts embedded. Workshop submissions must be self-contained and in English. Note: the review process is single-blind, so authors do not need to submit anonymized articles.
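For authors new to the ACM class, a minimal submission skeleton using the recommended setting might look as follows (the title, author, and affiliation are placeholders, not a prescribed format):

```latex
% Minimal sketch using the recommended acmart class options.
\documentclass[sigconf, anonymous, review]{acmart}

\begin{document}

\title{Article Title [Industry]}  % placeholder title with optional track tag
\author{A. Author}                % placeholder author
\affiliation{%
  \institution{Example University}
  \country{Country}}

\maketitle

\section{Introduction}
% ...

\end{document}
```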
All papers should be submitted to https://easychair.org/conferences/?conf=prompteng2025.
PromptEng will take place on the afternoon of April 29th, 2025, from 1:30pm to 5:00pm. All times are local Sydney time. (In the meantime, please don't forget to register and attend... ☺)
Title: Release the Powers of Prompt Tuning: Cross-Modality Prompt Transfer
Dr. Zhen Fang, Lecturer/Assistant Professor, University of Technology Sydney, Australia
https://fang-zhen.github.io/index.html
Abstract: Prompt tuning has redefined the paradigm of adapting pre-trained transformers, achieving parameter-efficient adaptation while preserving competitive performance, a breakthrough particularly impactful for deploying large models in resource-constrained scenarios. However, emerging research uncovers a deeper capability: these learned prompts exhibit cross-task transferability. This discovery catalyses prompt transfer, where prompts pretrained on data-rich source tasks are reused on target tasks within the same modality, effectively mitigating prompt tuning's sensitivity to data scarcity and accelerating adaptation efficiency. Yet, a critical limitation persists: conventional prompt transfer assumes intra-modality transfer, failing when target tasks reside in data-scarce modalities (e.g., medical imaging or satellite visuals) that lack sufficient data to train transferable prompts.

In this talk, we unveil a pioneering framework: cross-modality prompt transfer, where source prompts can be trained on a different but data-rich modality. We focus on demonstrating its feasibility and efficiency, and present novel criteria for selecting optimal source prompts when multiple candidate tasks are available.
Bio: Dr. Zhen Fang is a lecturer at the University of Technology Sydney, Australia. He received his Ph.D. from the University of Technology Sydney in 2021. His research focuses on the algorithmic and theoretical foundations of transfer learning and out-of-distribution learning. He has served as an Area Chair for NeurIPS and ACM MM. He is the recipient of the Australasian AI Emerging Researcher Award, the NeurIPS 2022 Outstanding Reviewer Award, and the Australian Research Council Discovery Early Career Researcher Award (DECRA 2025). His first author work on out-of-distribution learning received the Outstanding Paper Award at NeurIPS 2022.
TBA.
Time (UTC+11) | Title |
---|---|
At 1:30pm | Keynote #1 (30' Presentation + 5' QA) |
1:30pm-1:35pm | Opening words |
1:35pm-2:10pm | Release the Powers of Prompt Tuning: Cross-Modality Prompt Transfer By Dr. Zhen Fang, Assistant Professor, University of Technology Sydney, Australia |
At 2:10pm | Paper Session I (8' Presentation + 2' QA) |
2:10pm-2:20pm | Engineering Prompts for Spatial Questions By Nicole Schneider, Nandini Ramachandran, Kent O'Sullivan and Hanan Samet |
2:20pm-2:30pm | The Iterative Proof-Driven Development LLM Prompt By Aneesha Bakharia |
2:30pm-2:40pm | EdgePrompt: Engineering Guardrail Techniques for Offline LLMs in K-12 Educational Settings By Riza Alaudin Syah, Christoforus Yoga Haryanto, Emily Lomempow, Krishna Malik and Irvan Putra |
2:40pm-2:50pm | A Concept for Integrating an LLM-Based Natural Language Interface for Visualizations Grammars By Adrian Jobst, Daniel Atzberger, Mariia Tytarenko, Willy Scheibel, Jürgen Döllner and Tobias Schreck |
2:50pm-3:00pm | Analyzing the Sensitivity of Prompt Engineering Techniques in Natural Language Interfaces for 2.5D Software Visualization By Daniel Atzberger, Adrian Jobst, Mariia Tytarenko, Willy Scheibel, Jürgen Döllner and Tobias Schreck |
3:00pm-3:30pm | Break |
At 3:30pm | Keynote #2 (25' Presentation + 5' QA) |
3:30pm-4:00pm | TBA. |
At 4:00pm | Paper Session II (12' Presentation + 3' QA) |
4:00pm-4:15pm | Empirical Evaluation of Prompting Strategies for Fact Verification Tasks By Mohna Chakraborty, Adithya Kulkarni and Qi Li |
4:15pm-4:30pm | LLM Shots: Best Fired at System or User Prompts? By Umut Halil, Jin Huang, Damien Graux and Jeff Z. Pan |
4:30pm-4:45pm | Leveraging Prompt Engineering with Lightweight Large Language Model to Label and Extract Clinical Information from Radiology Report By Chayan Mondal, Duc-Son Pham, Ashu Gupta, Tele Tan and Tom Gedeon |
4:45pm-5:00pm | From Tables to Triples: A Prompt Engineering Approach By Maria Angela Pellegrino and Gabriele Tuozzo |
At 5:00pm | Wrap-up |
Name | Affiliation |
---|---|
Russa Biswas | Aalborg University, Copenhagen, Denmark |
Quentin Brabant | Orange Labs, France |
Christophe Cerisara | LORIA, France |
Thibault Cordier | Quantmetry, France |
Soumyabrata Dev | University College Dublin, Ireland |
Btissam Er-Rahmadi | Huawei Technologies R&D, UK |
Shrestha Ghosh | University of Tübingen, Germany |
Lina Maria Rojas-Barahona | Orange Labs, France |
Gerard de Melo | HPI, University of Potsdam, Germany |
Anastasia Shimorina | Orange Labs, France |
Wendy Zhou | University of Edinburgh, UK |
PromptEng 2025 is co-located with the ACM WebConf 2025.
Sydney, Australia
More info about the venue.