customer_email = """
Arrr, I be fuming that me blender lid \
flew off and splattered me kitchen walls \
with smoothie! And to make matters worse, \
the warranty don't cover the cost of \
cleaning up me kitchen. I need yer help \
right now, matey!
"""

style = """American English \
in a calm and respectful tone
"""

prompt = f"""Translate the text \
that is delimited by triple backticks
into a style that is {style}.
text: ```{customer_email}```
"""

response = get_completion(prompt)
The model's response is:
"Ah, I'm really frustrated that my blender lid flew off and splattered my kitchen walls with smoothie! And to make matters worse, the warranty doesn't cover the cost of cleaning up my kitchen. I could really use your help right now, friend."
# No f-string is needed here
template_string = """Translate the text \
that is delimited by triple backticks \
into a style that is {style}. \
text: ```{text}```
"""

from langchain.prompts import ChatPromptTemplate

prompt_template = ChatPromptTemplate.from_template(template_string)
print(prompt_template.messages[0].prompt)
PromptTemplate(input_variables=['style', 'text'], output_parser=None, partial_variables={}, template='Translate the text that is delimited by triple backticks into a style that is {style}. text: ```{text}```\n', template_format='f-string', validate_template=True)
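The `template_format='f-string'` in the repr above means the `{style}` and `{text}` placeholders are filled in the same way as Python's own `str.format`; a stdlib-only illustration of that substitution step (`ChatPromptTemplate` adds message roles on top of this):

```python
# Plain-Python stand-in for the template substitution step.
template_string = ("Translate the text that is delimited by triple backticks "
                   "into a style that is {style}. text: {text}")

filled = template_string.format(style="calm American English",
                                text="Arrr, me blender lid flew off!")
print(filled)
```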
customer_style = """American English \
in a calm and respectful tone
"""

customer_email = """
Arrr, I be fuming that me blender lid \
flew off and splattered me kitchen walls \
with smoothie! And to make matters worse, \
the warranty don't cover the cost of \
cleaning up me kitchen. I need yer help \
right now, matey!
"""

customer_messages = prompt_template.format_messages(
    style=customer_style,
    text=customer_email)

print(type(customer_messages))
print(type(customer_messages[0]))
print(customer_messages[0])
<class 'list'>
<class 'langchain.schema.HumanMessage'>
content="Translate the text that is delimited by triple backticks into a style that is American English in a calm and respectful tone\n. text: ```\nArrr, I be fuming that me blender lid flew off and splattered me kitchen walls with smoothie! And to make matters worse, the warranty don't cover the cost of cleaning up me kitchen. I need yer help right now, matey!\n```\n" additional_kwargs={} example=False
# Call the LLM to translate to the style of the customer message
customer_response = chat(customer_messages)
print(customer_response.content)
Oh man, I'm really frustrated that my blender lid flew off and made a mess of my kitchen walls with smoothie! And on top of that, the warranty doesn't cover the cost of cleaning up my kitchen. I could really use your help right now, buddy!
service_reply = """Hey there customer, \
the warranty does not cover \
cleaning expenses for your kitchen \
because it's your fault that \
you misused your blender \
by forgetting to put the lid on before \
starting the blender. \
Tough luck! See ya!
"""

service_style_pirate = """\
a polite tone \
that speaks in English Pirate\
"""

service_messages = prompt_template.format_messages(
    style=service_style_pirate,
    text=service_reply)

service_response = chat(service_messages)
print(service_response.content)
Ahoy there, valued customer! Regrettably, the warranty be not coverin' the cost o' cleanin' yer galley due to yer own negligence. Ye see, 'twas yer own doin' when ye forgot to secure the lid afore startin' the blender. 'Tis a tough break, indeed! Fare thee well, matey!
customer_review = """\
This leaf blower is pretty amazing. It has four settings:\
candle blower, gentle breeze, windy city, and tornado. \
It arrived in two days, just in time for my wife's \
anniversary present. \
I think my wife liked it so much she was speechless. \
So far I've been the only one using it, and I've been \
using it every other morning to clear the leaves on our lawn. \
It's slightly more expensive than the other leaf blowers \
out there, but I think it's worth it for the extra features.
"""

review_template = """\
For the following text, extract the following information:
gift: Was the item purchased as a gift for someone else? \
Answer True if yes, False if not or unknown.
delivery_days: How many days did it take for the product \
to arrive? If this information is not found, output -1.
price_value: Extract any sentences about the value or price,\
and output them as a comma separated Python list.
Format the output as JSON with the following keys:
gift
delivery_days
price_value
text: {text}
"""
from langchain.output_parsers import ResponseSchema
from langchain.output_parsers import StructuredOutputParser

gift_schema = ResponseSchema(name="gift",
                             description="Was the item purchased \
as a gift for someone else? \
Answer True if yes, \
False if not or unknown.")
delivery_days_schema = ResponseSchema(name="delivery_days",
                                      description="How many days \
did it take for the product \
to arrive? If this \
information is not found, \
output -1.")
price_value_schema = ResponseSchema(name="price_value",
                                    description="Extract any \
sentences about the value or \
price, and output them as a \
comma separated Python list.")

response_schemas = [gift_schema,
                    delivery_days_schema,
                    price_value_schema]
output_parser = StructuredOutputParser.from_response_schemas(response_schemas)
format_instructions = output_parser.get_format_instructions()
print(format_instructions)
The output should be a markdown code snippet formatted in the following schema, including the leading and trailing "\`\`\`json" and "\`\`\`":
```json
{
"gift": string // Was the item purchased as a gift for someone else? Answer True if yes, False if not or unknown.
"delivery_days": string // How many days did it take for the product to arrive? If this information is not found, output -1.
"price_value": string // Extract any sentences about the value or price, and output them as a comma separated Python list.
}
```
review_template_2 = """\
For the following text, extract the following information:
gift: Was the item purchased as a gift for someone else? \
Answer True if yes, False if not or unknown.
delivery_days: How many days did it take for the product\
to arrive? If this information is not found, output -1.
price_value: Extract any sentences about the value or price,\
and output them as a comma separated Python list.
text: {text}
{format_instructions}
"""

prompt = ChatPromptTemplate.from_template(template=review_template_2)
# format_instructions can be supplied here as an ordinary template variable
messages = prompt.format_messages(text=customer_review,
                                  format_instructions=format_instructions)
response = chat(messages)
print(response.content)
```json
{
"gift": true,
"delivery_days": 2,
"price_value": ["It's slightly more expensive than the other leaf blowers out there, but I think it's worth it for the extra features."]
}
```
Parsing this reply with `output_parser.parse(response.content)` then yields a Python dict:
{'gift': True,
'delivery_days': 2,
'price_value': ["It's slightly more expensive than the other leaf blowers out there, but I think it's worth it for the extra features."]}
dict
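Under the hood, this parsing step essentially strips the markdown fence and runs `json.loads`; a minimal stdlib stand-in (a hypothetical helper for illustration, not the library's actual implementation):

```python
import json
import re

TICKS = "`" * 3  # the ``` fence delimiter

def parse_fenced_json(text: str) -> dict:
    # Pull the payload out of a ```json ... ``` fence; fall back to the raw text.
    match = re.search(TICKS + r"json\s*(.*?)" + TICKS, text, re.DOTALL)
    payload = match.group(1) if match else text
    return json.loads(payload)

reply = (TICKS + "json\n"
         '{"gift": true, "delivery_days": 2, '
         '"price_value": ["worth it for the extra features"]}\n'
         + TICKS)
output_dict = parse_fenced_json(reply)
print(output_dict["delivery_days"])   # 2
print(type(output_dict).__name__)     # dict
```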
conversation.predict(input="Hi, my name is Andrew")
> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.
Current conversation:
Human: Hi, my name is Andrew
AI:
> Finished chain.
"Hello Andrew! It's nice to meet you. How can I assist you today?"
conversation.predict(input="What is 1+1?")
> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.
Current conversation:
Human: Hi, my name is Andrew
AI: Hello Andrew! It's nice to meet you. How can I assist you today?
Human: What is 1+1?
AI:
> Finished chain.
'1+1 equals 2. Is there anything else you would like to know?'
conversation.predict(input="What is my name?")
> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.
Current conversation:
Human: Hi, my name is Andrew
AI: Hello Andrew! It's nice to meet you. How can I assist you today?
Human: What is 1+1?
AI: 1+1 equals 2. Is there anything else you would like to know?
Human: What is my name?
AI:
> Finished chain.
'Your name is Andrew.'
You can inspect the earlier conversation:
print(memory.buffer)
Human: Hi, my name is Andrew
AI: Hello Andrew! It's nice to meet you. How can I assist you today?
Human: What is 1+1?
AI: 1+1 equals 2. Is there anything else you would like to know?
Human: What is my name?
AI: Your name is Andrew.
memory.load_memory_variables({})
{'history': "Human: Hi, my name is Andrew\nAI: Hello Andrew! It's nice to meet you. How can I assist you today?\nHuman: What is 1+1?\nAI: 1+1 equals 2. Is there anything else you would like to know?\nHuman: What is my name?\nAI: Your name is Andrew."}
from langchain.memory import ConversationBufferWindowMemory

memory = ConversationBufferWindowMemory(k=1)
memory.save_context({"input": "Hi"},
                    {"output": "What's up"})
memory.save_context({"input": "Not much, just hanging"},
                    {"output": "Cool"})
memory.load_memory_variables({})
{'history': 'Human: Not much, just hanging\nAI: Cool'}
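The windowing behaviour is easy to picture as a fixed-length deque of (human, ai) exchange pairs; a sketch of the idea (not the LangChain implementation):

```python
from collections import deque

class WindowMemory:
    """Keeps only the most recent k exchanges, like ConversationBufferWindowMemory(k=1)."""
    def __init__(self, k: int):
        self.exchanges = deque(maxlen=k)  # older pairs fall off automatically

    def save_context(self, human: str, ai: str) -> None:
        self.exchanges.append((human, ai))

    def history(self) -> str:
        return "\n".join(f"Human: {h}\nAI: {a}" for h, a in self.exchanges)

memory = WindowMemory(k=1)
memory.save_context("Hi", "What's up")
memory.save_context("Not much, just hanging", "Cool")
print(memory.history())  # only the last exchange survives
```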
conversation.predict(input="Hi, my name is Andrew")
"Hello Andrew! It's nice to meet you. How can I assist you today?"
conversation.predict(input="What is 1+1?")
'1+1 equals 2. Is there anything else you would like to know?'
conversation.predict(input="What is my name?")
"I'm sorry, I do not have access to personal information such as your name. Is there anything else you would like to know?"
Besides limiting the number of saved conversation turns, you can also directly cap the number of tokens kept in memory:
from langchain.memory import ConversationTokenBufferMemory
from langchain.llms import OpenAI

llm = ChatOpenAI(temperature=0.0, model=llm_model)
memory = ConversationTokenBufferMemory(llm=llm, max_token_limit=50)
memory.save_context({"input": "AI is what?!"},
                    {"output": "Amazing!"})
memory.save_context({"input": "Backpropagation is what?"},
                    {"output": "Beautiful!"})
memory.save_context({"input": "Chatbots are what?"},
                    {"output": "Charming!"})
memory.load_memory_variables({})
{'history': 'AI: Amazing!\nHuman: Backpropagation is what?\nAI: Beautiful!\nHuman: Chatbots are what?\nAI: Charming!'}
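A token-limited buffer works like the window memory, except the trimming criterion is total token count rather than number of turns; a sketch using whitespace word count as a stand-in tokenizer (the real memory class uses the model's actual tokenizer, which is why it needs the `llm` argument):

```python
def trim_to_token_limit(lines, max_tokens):
    # Drop the oldest lines until the (crudely estimated) token total fits.
    def count(text):
        return len(text.split())  # stand-in for a real tokenizer
    lines = list(lines)
    while lines and sum(count(line) for line in lines) > max_tokens:
        lines.pop(0)
    return lines

history = ["Human: AI is what?!", "AI: Amazing!",
           "Human: Backpropagation is what?", "AI: Beautiful!",
           "Human: Chatbots are what?", "AI: Charming!"]
print(trim_to_token_limit(history, max_tokens=10))
```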
from langchain.memory import ConversationSummaryBufferMemory

# create a long string
schedule = "There is a meeting at 8am with your product team. \
You will need your powerpoint presentation prepared. \
9am-12pm have time to work on your LangChain \
project which will go quickly because Langchain is such a powerful tool. \
At Noon, lunch at the Italian restaurant with a customer who is driving \
from over an hour away to meet you to understand the latest in AI. \
Be sure to bring your laptop to show the latest LLM demo."

memory = ConversationSummaryBufferMemory(llm=llm, max_token_limit=100)
memory.save_context({"input": "Hello"}, {"output": "What's up"})
memory.save_context({"input": "Not much, just hanging"},
                    {"output": "Cool"})
memory.save_context({"input": "What is on the schedule today?"},
                    {"output": f"{schedule}"})
memory.load_memory_variables({})
{'history': 'System: The human and AI exchange greetings and discuss the schedule for the day, including a meeting with the product team, work on the LangChain project, and a lunch meeting with a customer interested in AI. The AI provides details on each event and emphasizes the power of LangChain as a tool.'}
conversation = ConversationChain(llm=llm, memory=memory, verbose=True)
conversation.predict(input="What would be a good demo to show?")
> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.
Current conversation:
System: The human and AI exchange greetings and discuss the schedule for the day, including a meeting with the product team, work on the LangChain project, and a lunch meeting with a customer interested in AI. The AI provides details on each event and emphasizes the power of LangChain as a tool.
Human: What would be a good demo to show?
AI:
> Finished chain.
'For the meeting with the product team, a demo showcasing the latest features and updates on the LangChain project would be ideal. This could include a live demonstration of how LangChain streamlines language translation processes, improves accuracy, and increases efficiency. Additionally, highlighting any recent success stories or case studies would be beneficial to showcase the real-world impact of LangChain.'
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.chains import LLMChain

llm = ChatOpenAI(temperature=0.9, model=llm_model)
prompt = ChatPromptTemplate.from_template(
    "What is the best name to describe \
    a company that makes {product}?")
from langchain.chains import SimpleSequentialChain

llm = ChatOpenAI(temperature=0.9, model=llm_model)

# prompt template 1
first_prompt = ChatPromptTemplate.from_template(
    "What is the best name to describe \
    a company that makes {product}?")

# Chain 1: the first sub-chain
chain_one = LLMChain(llm=llm, prompt=first_prompt)

# prompt template 2
second_prompt = ChatPromptTemplate.from_template(
    "Write a 20 words description for the following \
    company: {company_name}")

# Chain 2: the second sub-chain
chain_two = LLMChain(llm=llm, prompt=second_prompt)

overall_simple_chain = SimpleSequentialChain(chains=[chain_one, chain_two],
                                             verbose=True)
overall_simple_chain.run(product)
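SimpleSequentialChain simply pipes one chain's single output into the next chain's single input; the composition can be sketched with plain functions (the LLM calls are replaced by stub functions here, purely for illustration):

```python
def simple_sequential(chains, first_input):
    # Each "chain" is a function taking one input and returning one output.
    value = first_input
    for chain in chains:
        value = chain(value)
    return value

# Stub stand-ins for chain_one / chain_two above.
name_company = lambda product: f"Royal {product.title()} Co."
describe = lambda company: f"{company} makes premium products."

print(simple_sequential([name_company, describe], "queen size sheet set"))
```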
from langchain.chains import SequentialChain

llm = ChatOpenAI(temperature=0.9, model=llm_model)

# Define the first chain
# prompt template 1: translate to english
first_prompt = ChatPromptTemplate.from_template(
    "Translate the following review to english:"
    "\n\n{Review}")
# chain 1: input= Review and output= English_Review
chain_one = LLMChain(llm=llm, prompt=first_prompt,
                     output_key="English_Review")

# Define the second chain (its input comes from chain 1's output)
second_prompt = ChatPromptTemplate.from_template(
    "Can you summarize the following review in 1 sentence:"
    "\n\n{English_Review}")
# chain 2: input= English_Review and output= summary
chain_two = LLMChain(llm=llm, prompt=second_prompt,
                     output_key="summary")

# Define the third chain
# prompt template 3: identify the language
third_prompt = ChatPromptTemplate.from_template(
    "What language is the following review:\n\n{Review}")
# chain 3: input= Review and output= language
chain_three = LLMChain(llm=llm, prompt=third_prompt,
                       output_key="language")

# Define the fourth chain (its inputs come from chain 2 and chain 3's outputs)
# prompt template 4: follow up message
fourth_prompt = ChatPromptTemplate.from_template(
    "Write a follow up response to the following "
    "summary in the specified language:"
    "\n\nSummary: {summary}\n\nLanguage: {language}")
# chain 4: input= summary, language and output= followup_message
chain_four = LLMChain(llm=llm, prompt=fourth_prompt,
                      output_key="followup_message")

# Define the SequentialChain
# overall_chain: input= Review
# and output= English_Review, summary, followup_message
overall_chain = SequentialChain(
    chains=[chain_one, chain_two, chain_three, chain_four],
    input_variables=["Review"],
    output_variables=["English_Review", "summary", "followup_message"],
    verbose=True)

review = df.Review[5]
overall_chain(review)
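The key-based wiring of SequentialChain amounts to a shared dict that each step reads its named inputs from and writes its `output_key` into; a sketch with stub steps standing in for the four LLMChains:

```python
def run_sequential(steps, inputs):
    """Each step is (input_keys, output_key, fn). All values live in one dict."""
    values = dict(inputs)
    for input_keys, output_key, fn in steps:
        values[output_key] = fn(*(values[k] for k in input_keys))
    return values

# Stub steps mirroring the Review -> English_Review -> summary -> followup wiring.
steps = [
    (["Review"], "English_Review", lambda r: f"(en) {r}"),
    (["English_Review"], "summary", lambda er: er[:20]),
    (["Review"], "language", lambda r: "French"),
    (["summary", "language"], "followup_message",
     lambda s, l: f"Reply in {l} to: {s}"),
]
result = run_sequential(steps, {"Review": "Je trouve le produit excellent."})
print(result["followup_message"])
```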
physics_template = """You are a very smart physics professor. \
You are great at answering questions about physics in a concise\
and easy to understand manner. \
When you don't know the answer to a question you admit\
that you don't know.
Here is a question:
{input}"""

math_template = """You are a very good mathematician. \
You are great at answering math questions. \
You are so good because you are able to break down \
hard problems into their component parts,
answer the component parts, and then put them together\
to answer the broader question.
Here is a question:
{input}"""

history_template = """You are a very good historian. \
You have an excellent knowledge of and understanding of people,\
events and contexts from a range of historical periods. \
You have the ability to think, reflect, debate, discuss and \
evaluate the past. You have a respect for historical evidence\
and the ability to make use of it to support your explanations \
and judgements.
Here is a question:
{input}"""

computerscience_template = """You are a successful computer scientist.\
You have a passion for creativity, collaboration,\
forward-thinking, confidence, strong problem-solving capabilities,\
understanding of theories and algorithms, and excellent communication \
skills. You are great at answering coding questions. \
You are so good because you know how to solve a problem by \
describing the solution in imperative steps \
that a machine can easily interpret and you know how to \
choose a solution that has a good balance between \
time complexity and space complexity.
Here is a question:
{input}"""
prompt_infos = [
    {
        "name": "physics",
        "description": "Good for answering questions about physics",
        "prompt_template": physics_template
    },
    {
        "name": "math",
        "description": "Good for answering math questions",
        "prompt_template": math_template
    },
    {
        "name": "History",
        "description": "Good for answering history questions",
        "prompt_template": history_template
    },
    {
        "name": "computer science",
        "description": "Good for answering computer science questions",
        "prompt_template": computerscience_template
    }
]
# Prompt template for the router chain
# Input: provided by the user
# Output: JSON with two keys, destination and next_inputs
# The first key, destination:
#   the model picks the most suitable sub-chain from destination_chains
#   based on the user input; if no sub-chain fits, default_chain is run.
#   destination can be physics, math, History, computer science, or DEFAULT.
# The second key, next_inputs:
#   if the model thinks a revised input would produce a better result,
#   it may rewrite the input and return the revision as next_inputs.
MULTI_PROMPT_ROUTER_TEMPLATE="""Given a raw text input to a \
language model select the model prompt best suited for the input. \
You will be given the names of the available prompts and a \
description of what the prompt is best suited for. \
You may also revise the original input if you think that revising\
it will ultimately lead to a better response from the language model.
<< FORMATTING >>
Return a markdown code snippet with a JSON object formatted to look like:
```json
{
"destination": string \ "DEFAULT" or name of the prompt to use in {destinations}
"next_inputs": string \ a potentially modified version of the original input
}
```
REMEMBER: The value of "destination" MUST match one of \
the candidate prompts listed below. \
If "destination" does not fit any of the specified prompts, set it to "DEFAULT."
REMEMBER: "next_inputs" can just be the original input \
if you don't think any modifications are needed.
<< CANDIDATE PROMPTS >>
{destinations}
<< INPUT >>
{input}
<< OUTPUT (remember to include the ```json)>>"""

# Build the router chain's prompt template
router_template = MULTI_PROMPT_ROUTER_TEMPLATE.format(
    destinations=destinations_str)
router_prompt = PromptTemplate(
    template=router_template,
    input_variables=["input"],
    output_parser=RouterOutputParser(),
)

# Build the router chain
router_chain = LLMRouterChain.from_llm(llm, router_prompt)
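The routing decision itself boils down to: look up `destination` in the map of sub-chains, fall back to a default chain when it is `DEFAULT` or unrecognized, and pass `next_inputs` along; a sketch with stub chains in place of the LLMChains:

```python
def route(parsed, destination_chains, default_chain):
    # parsed is the router LLM's JSON: {"destination": ..., "next_inputs": ...}
    chain = destination_chains.get(parsed["destination"], default_chain)
    return chain(parsed["next_inputs"])

# Stub stand-ins for the specialist chains.
destination_chains = {
    "physics": lambda q: f"[physics professor] {q}",
    "math": lambda q: f"[mathematician] {q}",
}
default_chain = lambda q: f"[general model] {q}"

print(route({"destination": "math", "next_inputs": "What is 2+2?"},
            destination_chains, default_chain))
print(route({"destination": "DEFAULT", "next_inputs": "Tell me a joke"},
            destination_chains, default_chain))
```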
from langchain.indexes import VectorstoreIndexCreator

index = VectorstoreIndexCreator(
    vectorstore_cls=DocArrayInMemorySearch).from_loaders([loader])

query = "Please list all your shirts with sun protection \
in a table in markdown and summarize each one."

llm_replacement_model = OpenAI(temperature=0,
                               model='gpt-3.5-turbo-instruct')
response = index.query(query, llm=llm_replacement_model)
display(Markdown(response))
from langchain.embeddings import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()
embed = embeddings.embed_query("Hi my name is Harrison")
print(len(embed))
print(embed[:5])
query = "Please suggest a shirt with sunblocking"
docs = db.similarity_search(query)
print(len(docs))
print(docs[0])
4
Document(page_content=': 255\nname: Sun Shield Shirt by\ndescription: "Block the sun, not the fun – our high-performance sun shirt is guaranteed to protect from harmful UV rays. \n\nSize & Fit: Slightly Fitted: Softly shapes the body. Falls at hip.\n\nFabric & Care: 78% nylon, 22% Lycra Xtra Life fiber. UPF 50+ rated – the highest rated sun protection possible. Handwash, line dry.\n\nAdditional Features: Wicks moisture for quick-drying comfort. Fits comfortably over your favorite swimsuit. Abrasion resistant for season after season of wear. Imported.\n\nSun Protection That Won\'t Wear Off\nOur high-performance fabric provides SPF 50+ sun protection, blocking 98% of the sun\'s harmful rays. This fabric is recommended by The Skin Cancer Foundation as an effective UV protectant.', metadata={'source': 'OutdoorClothingCatalog_1000.csv', 'row': 255})
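`similarity_search` works by embedding the query and ranking the stored vectors by similarity (typically cosine similarity); the core ranking step in pure Python, with tiny made-up 3-dimensional vectors standing in for the ~1536-dimensional OpenAI embeddings:

```python
import math

def cosine(a, b):
    # Cosine similarity: dot product over the product of magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# Toy document vectors (real embeddings have ~1536 dimensions).
store = {
    "Sun Shield Shirt": [0.9, 0.1, 0.0],
    "Fleece Pullover": [0.1, 0.9, 0.2],
}
query_vec = [0.8, 0.2, 0.1]

best = max(store, key=lambda name: cosine(store[name], query_vec))
print(best)  # Sun Shield Shirt
```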
As you can see, the 4 most similar text chunks were retrieved. We concatenate these 4 chunks and hand them to the LLM to summarize:
# gpt-3.5-turbo turned out not to produce markdown table output here, so gpt-4 is used instead
llm = ChatOpenAI(temperature=0.0, model="gpt-4")

qdocs = "".join([docs[i].page_content for i in range(len(docs))])

response = llm.call_as_llm(f"{qdocs} Question: Please list all your \
shirts with sun protection in a table in markdown and summarize each one.")

display(Markdown(response))
query="Please list all your shirts with sun protection in a table \
in markdown and summarize each one."

response = qa_stuff.run(query)
display(Markdown(response))
examples = [
    {
        "query": "Do the Cozy Comfort Pullover Set \
have side pockets?",
        "answer": "Yes"
    },
    {
        "query": "What collection is the Ultra-Lofty \
850 Stretch Down Hooded Jacket from?",
        "answer": "The DownTek collection"
    }
]
[chain/start] [1:chain:RetrievalQA] Entering Chain run with input:
{
"query": "Do the Cozy Comfort Pullover Set have side pockets?"
}
[chain/start] [1:chain:RetrievalQA > 2:chain:StuffDocumentsChain] Entering Chain run with input:
[inputs]
[chain/start] [1:chain:RetrievalQA > 2:chain:StuffDocumentsChain > 3:chain:LLMChain] Entering Chain run with input:
{
"question": "Do the Cozy Comfort Pullover Set have side pockets?",
"context": ": 10\nname: Cozy Comfort Pullover Set, Stripe\ndescription: Perfect for lounging, this striped knit set lives up to its name. We used ultrasoft fabric and an easy design that's as comfortable at bedtime as it is when we have to make a quick run out.\n\nSize & Fit\n- Pants are Favorite Fit: Sits lower on the waist.\n- Relaxed Fit: Our most generous fit sits farthest from the body.\n\nFabric & Care\n- In the softest blend of 63% polyester, 35% rayon and 2% spandex.\n\nAdditional Features\n- Relaxed fit top with raglan sleeves and rounded hem.\n- Pull-on pants have a wide elastic waistband and drawstring, side pockets and a modern slim leg.\n\nImported.<<<<>>>>>: 73\nname: Cozy Cuddles Knit Pullover Set\ndescription: Perfect for lounging, this knit set lives up to its name. We used ultrasoft fabric and an easy design that's as comfortable at bedtime as it is when we have to make a quick run out. \n\nSize & Fit \nPants are Favorite Fit: Sits lower on the waist. \nRelaxed Fit: Our most generous fit sits farthest from the body. \n\nFabric & Care \nIn the softest blend of 63% polyester, 35% rayon and 2% spandex.\n\nAdditional Features \nRelaxed fit top with raglan sleeves and rounded hem. \nPull-on pants have a wide elastic waistband and drawstring, side pockets and a modern slim leg. \nImported.<<<<>>>>>: 632\nname: Cozy Comfort Fleece Pullover\ndescription: The ultimate sweater fleece \u2013 made from superior fabric and offered at an unbeatable price. \n\nSize & Fit\nSlightly Fitted: Softly shapes the body. Falls at hip. \n\nWhy We Love It\nOur customers (and employees) love the rugged construction and heritage-inspired styling of our popular Sweater Fleece Pullover and wear it for absolutely everything. From high-intensity activities to everyday tasks, you'll find yourself reaching for it every time.\n\nFabric & Care\nRugged sweater-knit exterior and soft brushed interior for exceptional warmth and comfort. Made from soft, 100% polyester. 
Machine wash and dry.\n\nAdditional Features\nFeatures our classic Mount Katahdin logo. Snap placket. Front princess seams create a feminine shape. Kangaroo handwarmer pockets. Cuffs and hem reinforced with jersey binding. Imported.\n\n \u2013 Official Supplier to the U.S. Ski Team\nTHEIR WILL TO WIN, WOVEN RIGHT IN. LEARN MORE<<<<>>>>>: 151\nname: Cozy Quilted Sweatshirt\ndescription: Our sweatshirt is an instant classic with its great quilted texture and versatile weight that easily transitions between seasons. With a traditional fit that is relaxed through the chest, sleeve, and waist, this pullover is lightweight enough to be worn most months of the year. The cotton blend fabric is super soft and comfortable, making it the perfect casual layer. To make dressing easy, this sweatshirt also features a snap placket and a heritage-inspired Mt. Katahdin logo patch. For care, machine wash and dry. Imported."
}
[llm/start] [1:chain:RetrievalQA > 2:chain:StuffDocumentsChain > 3:chain:LLMChain > 4:llm:ChatOpenAI] Entering LLM run with input:
{
"prompts": [
"System: Use the following pieces of context to answer the users question. \nIf you don't know the answer, just say that you don't know, don't try to make up an answer.\n----------------\n: 10\nname: Cozy Comfort Pullover Set, Stripe\ndescription: Perfect for lounging, this striped knit set lives up to its name. We used ultrasoft fabric and an easy design that's as comfortable at bedtime as it is when we have to make a quick run out.\n\nSize & Fit\n- Pants are Favorite Fit: Sits lower on the waist.\n- Relaxed Fit: Our most generous fit sits farthest from the body.\n\nFabric & Care\n- In the softest blend of 63% polyester, 35% rayon and 2% spandex.\n\nAdditional Features\n- Relaxed fit top with raglan sleeves and rounded hem.\n- Pull-on pants have a wide elastic waistband and drawstring, side pockets and a modern slim leg.\n\nImported.<<<<>>>>>: 73\nname: Cozy Cuddles Knit Pullover Set\ndescription: Perfect for lounging, this knit set lives up to its name. We used ultrasoft fabric and an easy design that's as comfortable at bedtime as it is when we have to make a quick run out. \n\nSize & Fit \nPants are Favorite Fit: Sits lower on the waist. \nRelaxed Fit: Our most generous fit sits farthest from the body. \n\nFabric & Care \nIn the softest blend of 63% polyester, 35% rayon and 2% spandex.\n\nAdditional Features \nRelaxed fit top with raglan sleeves and rounded hem. \nPull-on pants have a wide elastic waistband and drawstring, side pockets and a modern slim leg. \nImported.<<<<>>>>>: 632\nname: Cozy Comfort Fleece Pullover\ndescription: The ultimate sweater fleece \u2013 made from superior fabric and offered at an unbeatable price. \n\nSize & Fit\nSlightly Fitted: Softly shapes the body. Falls at hip. \n\nWhy We Love It\nOur customers (and employees) love the rugged construction and heritage-inspired styling of our popular Sweater Fleece Pullover and wear it for absolutely everything. 
From high-intensity activities to everyday tasks, you'll find yourself reaching for it every time.\n\nFabric & Care\nRugged sweater-knit exterior and soft brushed interior for exceptional warmth and comfort. Made from soft, 100% polyester. Machine wash and dry.\n\nAdditional Features\nFeatures our classic Mount Katahdin logo. Snap placket. Front princess seams create a feminine shape. Kangaroo handwarmer pockets. Cuffs and hem reinforced with jersey binding. Imported.\n\n \u2013 Official Supplier to the U.S. Ski Team\nTHEIR WILL TO WIN, WOVEN RIGHT IN. LEARN MORE<<<<>>>>>: 151\nname: Cozy Quilted Sweatshirt\ndescription: Our sweatshirt is an instant classic with its great quilted texture and versatile weight that easily transitions between seasons. With a traditional fit that is relaxed through the chest, sleeve, and waist, this pullover is lightweight enough to be worn most months of the year. The cotton blend fabric is super soft and comfortable, making it the perfect casual layer. To make dressing easy, this sweatshirt also features a snap placket and a heritage-inspired Mt. Katahdin logo patch. For care, machine wash and dry. Imported.\nHuman: Do the Cozy Comfort Pullover Set have side pockets?"
]
}
[llm/end] [1:chain:RetrievalQA > 2:chain:StuffDocumentsChain > 3:chain:LLMChain > 4:llm:ChatOpenAI] [17.971ms] Exiting LLM run with output:
{
"generations": [
[
{
"text": "Yes, the Cozy Comfort Pullover Set does have side pockets.",
"generation_info": null,
"message": {
"content": "Yes, the Cozy Comfort Pullover Set does have side pockets.",
"additional_kwargs": {},
"example": false
}
}
]
],
"llm_output": {
"token_usage": {
"prompt_tokens": 732,
"completion_tokens": 14,
"total_tokens": 746
},
"model_name": "gpt-4"
}
}
[chain/end] [1:chain:RetrievalQA > 2:chain:StuffDocumentsChain > 3:chain:LLMChain] [18.450999999999997ms] Exiting Chain run with output:
{
"text": "Yes, the Cozy Comfort Pullover Set does have side pockets."
}
[chain/end] [1:chain:RetrievalQA > 2:chain:StuffDocumentsChain] [18.881999999999998ms] Exiting Chain run with output:
{
"output_text": "Yes, the Cozy Comfort Pullover Set does have side pockets."
}
[chain/end] [1:chain:RetrievalQA] [65.87700000000001ms] Exiting Chain run with output:
{
"result": "Yes, the Cozy Comfort Pullover Set does have side pockets."
}
'Yes, the Cozy Comfort Pullover Set does have side pockets.'
Example 0:
Question: Do the Cozy Comfort Pullover Set have side pockets?
Real Answer: Yes
Predicted Answer: Yes, the Cozy Comfort Pullover Set does have side pockets.
Predicted Grade: CORRECT
Example 1:
Question: What collection is the Ultra-Lofty 850 Stretch Down Hooded Jacket from?
Real Answer: The DownTek collection
Predicted Answer: The Ultra-Lofty 850 Stretch Down Hooded Jacket is from the DownTek collection.
Predicted Grade: CORRECT
Example 2:
Question: What is the weight of each pair of Women's Campside Oxfords?
Real Answer: The approximate weight of each pair of Women's Campside Oxfords is 1 lb. 1 oz.
Predicted Answer: The approximate weight of each pair of Women's Campside Oxfords is 1 lb. 1 oz.
Predicted Grade: CORRECT
Example 3:
Question: What are the dimensions of the small and medium sizes for the Recycled Waterhog Dog Mat, Chevron Weave?
Real Answer: The small size has dimensions of 18" x 28" and the medium size has dimensions of 22.5" x 34.5".
Predicted Answer: The small Recycled Waterhog Dog Mat, Chevron Weave has dimensions of 18" x 28". The medium size has dimensions of 22.5" x 34.5".
Predicted Grade: CORRECT
Example 4:
Question: What are some key features of the Infant and Toddler Girls' Coastal Chill Swimsuit, Two-Piece as described in the document?
Real Answer: The key features of the Infant and Toddler Girls' Coastal Chill Swimsuit, Two-Piece include bright colors, ruffles, exclusive whimsical prints, four-way-stretch and chlorine-resistant fabric, UPF 50+ rated fabric for sun protection, crossover no-slip straps, fully lined bottom for secure fit and maximum coverage.
Predicted Answer: The Infant and Toddler Girls' Coastal Chill Swimsuit, Two-Piece has several key features. It has bright colors, ruffles, and exclusive whimsical prints. The fabric is four-way-stretch and chlorine-resistant, which helps it keep its shape and resist snags. The fabric is also UPF 50+ rated, providing the highest rated sun protection possible and blocking 98% of the sun's harmful rays. The swimsuit has crossover no-slip straps and a fully lined bottom for a secure fit and maximum coverage. It is recommended to machine wash and line dry the swimsuit for best results.
Predicted Grade: CORRECT
Example 5:
Question: What is the fabric composition of the Refresh Swimwear V-Neck Tankini Contrasts?
Real Answer: The body of the tankini top is made of 82% recycled nylon and 18% Lycra® spandex, while the lining is made of 90% recycled nylon and 10% Lycra® spandex.
Predicted Answer: The Refresh Swimwear, V-Neck Tankini Contrasts is made of 82% recycled nylon with 18% Lycra® spandex. It is lined with 90% recycled nylon and 10% Lycra® spandex.
Predicted Grade: CORRECT
Example 6:
Question: What technology sets the EcoFlex 3L Storm Pants apart from other waterproof pants?
Real Answer: The EcoFlex 3L Storm Pants feature TEK O2 technology, which offers the most breathability ever tested in waterproof pants.
Predicted Answer: The EcoFlex 3L Storm Pants are set apart by the TEK O2 technology. This state-of-the-art technology offers the most breathability ever tested, making the pants suitable for a variety of outdoor activities year-round. The pants are also loaded with features outdoor enthusiasts appreciate, including weather-blocking gaiters and handy side zips.
Predicted Grade: CORRECT
The agent parameter specifies the agent type; the most commonly used type is CHAT_ZERO_SHOT_REACT_DESCRIPTION. Here, CHAT means a chat model is used (e.g., GPT-3.5 or GPT-4, which support multi-turn conversation formats); ZERO_SHOT means no examples are provided — the model decides on its own which tool to use based solely on the tool descriptions; REACT means the ReAct framework (Reasoning + Acting) is applied; and DESCRIPTION means each tool's description is the key evidence the LLM uses when deciding whether to call it. For example, the "llm-math" tool is described as "Useful for when you need to answer questions about math." and the "wikipedia" tool as "A wrapper around Wikipedia. Useful for when you need to answer general questions about people, places, companies, facts, historical events, or other subjects. Input should be a search query."
question = "Tom M. Mitchell is an American computer scientist \
and the Founders University Professor at Carnegie Mellon University (CMU), \
what book did he write?"

result = agent(question)
agent = create_python_agent(llm,
                            tool=PythonREPLTool(),
                            verbose=True)

customer_list = [["Harrison", "Chase"],
                 ["Lang", "Chain"],
                 ["Dolly", "Too"],
                 ["Elle", "Elem"],
                 ["Geoff", "Fusion"],
                 ["Trance", "Former"],
                 ["Jen", "Ayai"]]

agent.run(f"""Please use Python code (executed via the python_repl tool) \
to sort the following list of customers by last name, then first name, \
and print the result: {customer_list}""")
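The sort the agent is asked to perform is a standard two-key sort; the Python the agent generates inside the REPL tool presumably boils down to something like:

```python
customer_list = [["Harrison", "Chase"], ["Lang", "Chain"], ["Dolly", "Too"],
                 ["Elle", "Elem"], ["Geoff", "Fusion"], ["Trance", "Former"],
                 ["Jen", "Ayai"]]

# Sort by last name first, then by first name.
sorted_customers = sorted(customer_list, key=lambda c: (c[1], c[0]))
print(sorted_customers)
```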
from langchain.agents import tool
from datetime import date

@tool
def time(text: str) -> str:
    """Returns today's date; use this for any \
    questions related to knowing today's date. \
    The input should always be an empty string, \
    and this function will always return today's \
    date - any date mathematics should occur \
    outside this function."""
    return str(date.today())

agent = initialize_agent(
    tools + [time],
    llm,
    agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    handle_parsing_errors=True,
    verbose=True)

try:
    result = agent("whats the date today?")
except:
    print("exception on external access")