Anchor Browser

Anchor is a platform for AI agentic browser automation that solves the challenge of automating workflows for web applications that lack APIs or have limited API coverage. It simplifies the creation, deployment, and management of browser-based automations, transforming complex web interactions into simple API endpoints.

This notebook provides a quick overview for getting started with Anchor Browser tools. For more information on Anchor Browser, visit Anchorbrowser.io or the Anchor Browser Docs.

Overview

Integration details

The Anchor Browser package for LangChain is langchain-anchorbrowser.

Tool features

| Tool Name | Package | Description | Parameters |
| --- | --- | --- | --- |
| AnchorContentTool | langchain-anchorbrowser | Extract text content from web pages | url, format |
| AnchorScreenshotTool | langchain-anchorbrowser | Take screenshots of web pages | url, width, height, image_quality, wait, scroll_all_content, capture_full_height, s3_target_address |
| AnchorWebTaskToolKit | langchain-anchorbrowser | Perform intelligent web tasks using AI (Simple & Advanced modes) | see below |

The parameters exposed by langchain-anchorbrowser are only a subset of those listed in the corresponding Anchor Browser API references: Get Webpage Content, Screenshot Webpage, and Perform Web Task, respectively.

Info: Anchor currently implements the SimpleAnchorWebTaskTool and AdvancedAnchorWebTaskTool tools for LangChain, using the browser_use agent.

AnchorWebTaskToolKit Tools

The tools in this toolkit differ only in their Pydantic input schema.

| Tool Name | Package | Parameters |
| --- | --- | --- |
| SimpleAnchorWebTaskTool | langchain-anchorbrowser | prompt, url |
| AdvancedAnchorWebTaskTool | langchain-anchorbrowser | prompt, url, output_schema |

Setup

The integration lives in the langchain-anchorbrowser package.

```python
%pip install --quiet -U langchain-anchorbrowser
```

Credentials

Use your Anchor Browser credentials. You can create an API key on the Anchor Browser API Keys page as needed.

```python
import getpass
import os

if not os.environ.get("ANCHORBROWSER_API_KEY"):
    os.environ["ANCHORBROWSER_API_KEY"] = getpass.getpass("ANCHORBROWSER API key:\n")
```

Instantiation

Instantiate the Anchor Browser tools.

```python
from langchain_anchorbrowser import (
    AnchorContentTool,
    AnchorScreenshotTool,
    AdvancedAnchorWebTaskTool,
)

anchor_content_tool = AnchorContentTool()
anchor_screenshot_tool = AnchorScreenshotTool()
anchor_advanced_web_task_tool = AdvancedAnchorWebTaskTool()
```
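To verify the tools are set up, you can inspect their names, descriptions, and argument schemas. These are standard LangChain BaseTool attributes, so the sketch below should apply, though the exact values depend on the installed langchain-anchorbrowser version:

```python
# Inspect standard BaseTool attributes on each Anchor Browser tool
for tool in (anchor_content_tool, anchor_screenshot_tool, anchor_advanced_web_task_tool):
    print(tool.name)
    print(tool.description)
    print(tool.args)  # dict describing the tool's input parameters
```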
Invocation

Invoke directly with args

The full list of available arguments appears in the tool features table above.


```python
anchor_content_tool.invoke(
    {"url": "https://www.anchorbrowser.io", "format": "markdown"}
)
```


```python
anchor_screenshot_tool.invoke(
    {"url": "https://docs.anchorbrowser.io", "width": 1280, "height": 720}
)
```


```python
anchor_advanced_web_task_tool.invoke(
    {
        "prompt": "Collect the node names and their CPU average %",
        "url": "https://play.grafana.org/a/grafana-k8s-app/navigation/nodes?from=now-1h&to=now&refresh=1m",
        "output_schema": {
            "nodes_cpu_usage": [
                {"node": "string", "cluster": "string", "cpu_avg_percentage": "number"}
            ]
        },
    }
)
```
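The simple variant of the web task tool accepts only prompt and url (see the toolkit table above). A minimal sketch, assuming SimpleAnchorWebTaskTool is importable from langchain_anchorbrowser and follows the same invoke pattern as the other tools; the prompt and URL here are illustrative:

```python
from langchain_anchorbrowser import SimpleAnchorWebTaskTool

anchor_simple_web_task_tool = SimpleAnchorWebTaskTool()

# No output_schema here; the simple variant takes just a prompt and a URL
anchor_simple_web_task_tool.invoke(
    {
        "prompt": "List the main navigation links on this page",
        "url": "https://www.anchorbrowser.io",
    }
)
```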
Invoke with ToolCall

We can also invoke the tool with a model-generated ToolCall, in which case a ToolMessage will be returned:


```python
model_generated_tool_call = {
    "args": {"url": "https://www.anchorbrowser.io", "format": "markdown"},
    "id": "1",
    "name": anchor_content_tool.name,
    "type": "tool_call",
}
anchor_content_tool.invoke(model_generated_tool_call)
```
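The returned ToolMessage exposes the tool output on its content field and echoes the call's tool_call_id, which is what a chat model expects when the message is passed back in a conversation. A small sketch (the output shown in comments is illustrative):

```python
# Invoking with a ToolCall returns a ToolMessage rather than a raw string
tool_msg = anchor_content_tool.invoke(model_generated_tool_call)
print(tool_msg.tool_call_id)   # "1", matching the id of the tool call above
print(tool_msg.content[:200])  # start of the extracted page content (assuming a string)
```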
Chaining

We can use our tool in a chain by first binding it to a tool-calling model and then calling it:

```python
%pip install -qU langchain langchain-openai
```
```python
import getpass
import os

if not os.environ.get("OPENAI_API_KEY"):
    os.environ["OPENAI_API_KEY"] = getpass.getpass("OPENAI API key:\n")

from langchain.chat_models import init_chat_model

llm = init_chat_model(model="gpt-4o", model_provider="openai")
```
```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableConfig, chain

prompt = ChatPromptTemplate(
    [
        ("system", "You are a helpful assistant."),
        ("human", "{user_input}"),
        ("placeholder", "{messages}"),
    ]
)
```


```python
llm_with_tools = llm.bind_tools(
    [anchor_content_tool], tool_choice=anchor_content_tool.name
)

llm_chain = prompt | llm_with_tools


@chain
def tool_chain(user_input: str, config: RunnableConfig):
    input_ = {"user_input": user_input}
    ai_msg = llm_chain.invoke(input_, config=config)
    tool_msgs = anchor_content_tool.batch(ai_msg.tool_calls, config=config)
    return llm_chain.invoke({**input_, "messages": [ai_msg, *tool_msgs]}, config=config)


# Reads a query from stdin and runs it through the chain
tool_chain.invoke(input())
```
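Use within an agent

The tools can also be handed to an agent that decides when and how to call them. Below is a minimal sketch using LangGraph's prebuilt ReAct agent; it assumes langgraph is installed (%pip install -qU langgraph), and the query is illustrative:

```python
from langgraph.prebuilt import create_react_agent

# Let the agent choose among the Anchor Browser tools based on the request
agent = create_react_agent(
    llm,
    [anchor_content_tool, anchor_screenshot_tool, anchor_advanced_web_task_tool],
)

agent.invoke(
    {
        "messages": [
            ("user", "Extract the content of https://www.anchorbrowser.io as markdown")
        ]
    }
)
```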
API reference
