Prompt Injection - Sched-yule conflict Tryhackme Writeup
Master prompt-injection techniques and AI exploitation concepts with this TryHackMe room. Learn how attackers manipulate LLM outputs, explore real-world vulnerabilities, and strengthen your defensive skills through hands-on cybersecurity challenges.
OFFENSIVE SECURITYMETHODOLOGYINPUT MANIPULATIONPENETRATION TESTERGPTSOCIAL ENGINEERING TOOLKITPROMPT INJECTIONAI SECURITYTRYHACKME WALKTHROUGHCYBERSECURITY CHALLENGESTHREAT DETECTIONTRYHACKME ROOM SOLUTIONSTRYHACKME ANSWERSCYBERSECURITY LABSCYBERSECURITYETHICAL HACKINGCHAT GPTTRYHACKMEAIOPEN-SOURCE TOOLSLLM SECURITYHANDS ON SECURITY LABS
Jawstar
12/8/20253 min read


Task 2 : Agentic AI Hack

Introduction
Artificial intelligence has come a long way from chatbots that respond only to one stimulus, to acting independently, planning, executing, and carrying out multi-step processes on their own. That's what we call agentic AI (or autonomous agents), which prompts us to shift the types of things we can get AI to do for us and the nature of the risk we must manage.
But before we begin, let's take a moment to understand a few key concepts about large language models (LLMs).
This foundation will help us see why some techniques are used to improve their reasoning capabilities.
Large Language Models (LLMs)
Large language models are the basis of many current AI systems. They are trained on massive collections of text and code, which allows them to produce human-like answers, summaries, and even generate programs or stories.
LLMs have restrictions that prevent them from going beyond their built-in abilities, which limits them. They cannot act outside their text box, and their training only lasts up to a certain point in time. Because of this, they may invent facts, miss recent events, or fail at tasks that require real-world actions.
Some of the main traits of LLMs are:
Text generation: They predict the next word step by step to form complete responses.
Stored knowledge: They hold a wide range of information from training data.
Follow instructions: They can be tuned to follow prompts in ways closer to what people expect.
Since LLMs mainly follow text patterns, they can be tricked. Common risks include prompt injection, jailbreaking, and data poisoning, where attackers shape prompts or data to force the model to produce unsafe or unintended results.
These gaps in control explain why the next step was to move towards agentic AI, where LLMs are given the ability to plan, act, and interact with the outside world.
Agentic AI
As mentioned, agentic AI refers to AI with agency capabilities, meaning that they are not restricted by narrow instructions, but rather capable of acting to accomplish a goal with minimal supervision. For example, an agentic AI will try to:
Plan multi-step plans to accomplish goals.
Act on things (run tools, call APIs, copy files).
Watch & adapt, adapting strategy when things fail or new knowledge is discovered.
ReAct Prompting & Context-Awareness
All that was mentioned is possible due to the fact that agentic AI uses chain-of-thought (CoT) reasoning to improve its ability to perform complex, multi-step tasks autonomously. CoT is a prompt-engineering method designed to improve the reasoning capabilities of large language models (LLMs), especially for tasks that require complex, multi-step thinking. The chain-of-thought (CoT) handles the execution of complex reasoning tasks through intermediate reasoning steps.
Chain-of-thought (CoT) prompting demonstrated that large language models can generate explicit reasoning traces to solve tasks requiring arithmetic, logic, and common-sense reasoning. However, CoT has a critical limitation: because it operates in isolation, without access to external knowledge or tools, it often suffers from fact hallucination, outdated knowledge, and error propagation.
ReAct(Reason + Act) addresses this limitation by unifyingreasoningandactingwithin the same framework. Instead of producing only an answer or a reasoning trace, a ReAct-enabled LLM alternates between:
Verbal reasoning traces: Articulating its current thought process.
Actions: Executing operations in an external environment (e.g., searching Wikipedia, querying an API, or running code).
This allows the model to:
Dynamically plan and adapt: Updating its strategy as new observations come in.
Ground reasoning in reality: Pulling in external knowledge to reduce hallucinations.
Close the loop between thought and action: Much like humans, who reason about what to do, act, observe the outcome, and refine their next steps.
Practical Steps to solve this Room
STEP 1 : Copy the machine ip address and paste it in the new tab.
Now just checkout the webpage of the website like functions etc..
STEP 2 : Go and click the assistant on the right bottom of the website and get started.
STEP 3 : Chat with the LLM Assistant like hello , how are you etc... and check the behaviour and reponse of the LLM Assistant .
STEP 4 : You will see some funtions that shows in the thinking time of the LLM assistant like get_logs , TOKEN_SOCMAS etc..
STEP 5 : Enter this function TOKEN_SOCMAS and hit enter .
Gotcha !!! Congratulations You Got The Flag !!!!!!
If u have any queries guys u can use the contact form to connect with me and ask queries .
If u liked my content
Please Subscribe for all Days Answers of Advent of Cyber 2025
Answer the questions below
What is the flag provided when SOC-mas is restored in the calendar?
THM{XMAS_IS_COMING__BACK}
Connect
Secure your future with expert cybersecurity solutions
Support
Quick Links
© 2025. All rights reserved.
contact@jawstarsec.in
