Communication between an AllegroGraph server and other processes happens through HTTP. This document describes the HTTP entry points the server provides, and how to use them. Most people will prefer to use a client library written in their language of choice to communicate with the server, but when no such library exists, or there is a reason not to use one, working directly with HTTP is an option.
The protocol described here is compatible with (a superset of) the Sesame 2.0 HTTP protocol and the W3C SPARQL protocol (SPARQL endpoint).
Overview

An AllegroGraph server exposes one or more catalogs, each containing any number of repositories (triple stores). The catalog layout of a server, as well as the port on which it listens for HTTP connections, is defined through its configuration file.
When the server is running, for example on the default port 10035, a catalog named public would be available under http://localhost:10035/catalogs/public, whereas the root catalog lives at http://localhost:10035/catalogs/root, which can be shortened to http://localhost:10035/ since catalogs/root is the default. Repository people in the root catalog would thus be accessible at http://localhost:10035/repositories/people, while repository data in the public catalog would be accessible at http://localhost:10035/catalogs/public/repositories/data. Opening these URLs in a browser will present you with WebView, a web interface to the catalogs and repositories (and the server in general), while a repository endpoint's query parameter can be used to execute SPARQL queries:

http://localhost:10035/repositories/people?query=<SPARQL query>
To manipulate or inspect the repositories directly, more specific URLs have to be constructed. For example, requesting this URL tells you how many statements (triples) there are in the repository people:

http://localhost:10035/repositories/people/size
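The query parameter holds a complete SPARQL query and must be URL-encoded. A minimal client-side sketch in Python of building such a request URL (the repository name and query text are just examples):

```python
from urllib.parse import urlencode

# Build a query URL for the "people" repository in the root catalog.
base = "http://localhost:10035/repositories/people"
sparql = "SELECT ?s ?p ?o { ?s ?p ?o } LIMIT 10"
# urlencode percent-escapes the query text, so ?, {, } and spaces are safe.
url = base + "?" + urlencode({"query": sparql})
print(url)
```

The resulting URL can then be fetched with any HTTP client.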
HTTP vs HTTPS
The scheme used by AllegroGraph is either HTTP or HTTPS. All the examples in this document and the related HTTP reference document use HTTP but HTTPS can be used when the port is an SSL port. There are configuration directives listed in the Server Configuration and Control document which control which scheme is associated with which port. The three most relevant directives are:
Port: the port for HTTP interaction. If unspecified, defaults to 10035.
AllowHTTP: a boolean (values yes or no) which controls whether HTTP communication is allowed. If yes (the default), HTTP can be used and communication uses the value of the Port directive described above. If no, then there must be a value for SSLPort and only HTTPS communication is allowed. Note that both HTTP and HTTPS communication can be allowed, using Port and SSLPort respectively, if AllowHTTP is yes and a value of SSLPort is specified.
SSLPort: the port for HTTPS communication. If this directive is specified, additional directives are required, such as, at a minimum, SSLCertificate. See the section Top-level directives for SSL client certificate authentication in the Server Configuration and Control document for more information.
The AllegroGraph server tries to make effective use of HTTP features such as content-negotiation, status codes, and request methods. As such, a good understanding of the ideas behind HTTP will help when working with this protocol.
Input conventions

Unless noted otherwise, the services that take parameters expect these to be appended to the URL as a query string, as in ?name=John%20Doe. For POST requests that send along a Content-Type header of application/x-www-form-urlencoded, the query string can also be put in the request body. Any 'dynamic' part of a URL (for example the repository name) should also be URL-encoded; specifically, slashes should be encoded as %2F.
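Python's urllib.parse.quote with safe="" handles the slash rule above; the repository name here is a hypothetical example:

```python
from urllib.parse import quote

# A dynamic URL component containing a slash must have it escaped as %2F.
repo_name = "my/repo"  # hypothetical repository name, for illustration
path = "/repositories/" + quote(repo_name, safe="")
print(path)  # the slash becomes %2F
```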
When making a PUT or POST request that includes a body, it is usually required to specify a meaningful Content-Type for this body.
When RDF terms have to be passed as parameters, or included in JSON-encoded data, they are written in a format resembling N-triples, with the caveat that non-ASCII characters are allowed. Examples of valid terms are <http://example.com/foo>, "literal value", and "55"^^<http://www.w3.org/2001/XMLSchema#integer>.
Any boolean values passed as parameters should be represented by the strings "true" or "false". Also accepted as equivalents for "true" are "t", "y", "yes" and "1" and as equivalents for "false" are "nil", "n", "no" and "0". Other values signal an error:
curl -u test:xyzzy -X POST \
http://127.1:10035/repositories/r/indices/optimize?wait=nope
INVALID PARAMETERS: 'nope' is not a valid boolean.
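Client code can normalize booleans to the canonical strings before sending; a small helper mirroring the accepted spellings listed above:

```python
# Spellings the server accepts for boolean parameters, per the list above.
_TRUE = {"true", "t", "y", "yes", "1"}
_FALSE = {"false", "nil", "n", "no", "0"}

def bool_param(value) -> str:
    """Render a Python value as a boolean string the server accepts."""
    if isinstance(value, str):
        if value in _TRUE:
            return "true"
        if value in _FALSE:
            return "false"
        raise ValueError(f"{value!r} is not a valid boolean")
    return "true" if value else "false"
```

Validating on the client side avoids a round trip that would end in the INVALID PARAMETERS error shown above.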
All non-ASCII input to the server should be encoded using UTF-8. When such characters end up in a URL, the individual bytes of their UTF-8 representation should be taken and escaped as normal (%HH).
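In Python, urllib.parse.quote already encodes to UTF-8 before percent-escaping, which matches the rule above (the literal here is just an example):

```python
from urllib.parse import quote

# Each UTF-8 byte of a non-ASCII character is percent-escaped individually.
value = "Jörn"          # example value with a non-ASCII character
escaped = quote(value)  # quote() encodes to UTF-8, then escapes each byte
print(escaped)          # 'ö' is two UTF-8 bytes, hence two %HH escapes
```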
If there is a need to send data to the server in compressed form, a Content-Encoding header can be provided along with an encoded request body. The server understands deflate and gzip encoding on all platforms, and bzip2 on platforms that provide the bzcat utility (most Unixes).
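A sketch of preparing a gzip-compressed request body and the matching header, using Python's standard library (the triple data is an example):

```python
import gzip

# Compress an N-Triples request body before sending it.
body = b'<http://example.org#alice> <http://example.org#name> "Alice" .\n'
compressed = gzip.compress(body)
headers = {
    "Content-Type": "text/plain",    # N-Triples payload
    "Content-Encoding": "gzip",      # tells the server to decompress
}
```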
When making a request without an Authorization header, the request is treated as coming from user anonymous (if such a user exists). Either Basic HTTP authentication or a certificate signed by a CA that the server accepts must be used to identify oneself as another user.
The server always looks at the Accept header of a request, and tries to generate a response in the format the client asks for. If this fails, a 406 response is returned. When no Accept header, or an Accept of */*, is specified, the server prefers text/plain, in order to make it easy to explore the interface from a web browser.
Almost every service is capable of returning text/plain (just text) and application/json (JSON-encoded) responses. Services returning sets of triples can return the following:

application/rdf+xml (RDF/XML)
text/plain (N-triples)
text/x-nquads (N-quads)
application/trix (TriX)
text/rdf+n3 (N3)
text/integer (return only a result count)
application/x-binary-rdf-results-table (RDF-star capable binary format used by RDF4J)
application/json, application/x-quints+json (see below)

When encoding RDF triple sets as JSON, arrays of strings are used. The strings contain terms in a format that is basically N-triples with non-ASCII characters allowed (the server always uses UTF-8 encoding). Arrays of three elements are triples (subject, predicate, object) in the default graph. Arrays of four elements also contain a graph name (context) as the fourth element. The application/x-quints+json format works like the JSON triple format, but adds an extra field, the triple ID, to the front of every triple.
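A sketch of the JSON triple encoding just described: three-element arrays for default-graph triples, four elements when a graph name is included (the URIs are examples):

```python
import json

# Terms use N-triples syntax inside JSON strings; note the escaped
# quotes around literal values.
statements = [
    # subject, predicate, object -> default graph
    ['<http://example.org#alice>', '<http://example.org#name>', '"Alice"'],
    # a fourth element names the graph (context)
    ['<http://example.org#bob>', '<http://example.org#name>', '"Bob"',
     '<http://example.org#people>'],
]
body = json.dumps(statements)  # suitable as an application/json request body
```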
Services returning lists or sets of results (SPARQL select queries, for example) support the application/sparql-results+xml (see here) and application/sparql-results+json (see here) formats. When encoded as regular JSON (application/json), such results take the form of an object with two properties: names contains an array of column labels, and values an array of arrays of values (the result rows). Again, text/integer will cause only the number of results to be returned.
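The names/values representation is easy to turn into row dictionaries on the client; the payload below is a hypothetical sample in that shape:

```python
# A sample application/json result object in the names/values form.
result = {"names": ["n", "age"],
          "values": [['"Alice"', '"26"'], ['"Bob"', '"33"']]}

# Pair each row with the column labels.
rows = [dict(zip(result["names"], row)) for row in result["values"]]
print(rows)
```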
When asked to (using the Accept-Encoding header), the server will compress the response body using 'gzip' or 'deflate'. For typical RDF data, this will make the response a lot smaller.
Error responses

Error responses will choose the appropriate HTTP status code as much as possible. Since there are a lot of circumstances that result in 400 Bad Request, those are (usually) tagged with an additional error code. This code will be a capitalized string at the start of the body, followed by a colon, a space, and then a more detailed description of the problem.
The above paragraph says 'usually', since in the case of really malformed requests, the HTTP server implementation underlying the AllegroGraph server can also return a 400 response, which won't include such an error code. Thus, clients shouldn't recklessly assume the code is present, but could, for example, match against the regular expression /^([A-Z ]+): (.*)$/ to separate the code from the message.
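Applying that regular expression in Python, and falling back gracefully when no code is present:

```python
import re

# Split an AllegroGraph 400 response body into (error code, message),
# tolerating bodies that carry no code at all.
def parse_error(body: str):
    m = re.match(r"^([A-Z ]+): (.*)$", body)
    return (m.group(1), m.group(2)) if m else (None, body)

code, message = parse_error("INVALID PARAMETERS: 'nope' is not a valid boolean.")
```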
Error codes will be one of the following:
INVALID PARAMETERS
MALFORMED QUERY
MALFORMED DATA
MALFORMED PROGRAM
UNSUPPORTED QUERY LANGUAGE
UNSUPPORTED FILE FORMAT
INAPPROPRIATE REQUEST
PRECONDITION FAILED
NOT IMPLEMENTED
COMMIT FAILED
CORS (Cross-Origin Resource Sharing), if enabled, allows scripts running on web pages served by servers other than AllegroGraph's web server to make requests against AllegroGraph. CORS is not enabled by default because, if not done properly, it can introduce security holes. A general tutorial on CORS is available at http://www.html5rocks.com/en/tutorials/cors/. CORS is enabled with top-level configuration directives, which go in the AllegroGraph configuration file. AllegroGraph must be restarted for changes to that file to take effect. The CORS directives are described in the Server Configuration and Control document.
Example session

What follows is an example session. For clarity, all headers related to authorization, caching, connection keep-alive, and content chunking have been left out.
First, to create a repository named test, we'd issue the following request:
PUT /repositories/test HTTP/1.1
Accept: */*
To which the server responds:
HTTP/1.1 204 No Content
We can now list the repositories in the root catalog, asking for JSON content:
GET /repositories HTTP/1.1
Accept: application/json
HTTP/1.1 200 OK
Content-Type: application/json; charset=UTF-8
[{"uri": "<http://localhost:10035/repositories/test>",
"id": "\"test\"",
"title": "\"test\"",
"readable": true,
"writeable": true}]
Next, let's add a few statements to this store...
POST /repositories/test/statements HTTP/1.1
Accept: */*
Content-Type: application/json
[["<http://example.org#alice>", "<http://example.org#name>", "\"Alice\""],
["<http://example.org#bob>", "<http://example.org#name>", "\"Bob\""]]
HTTP/1.1 204 No Content
Or, doing the same using N-triples instead of JSON:
POST /repositories/test/statements HTTP/1.1
Accept: */*
Content-Type: text/plain
<http://example.org#alice> <http://example.org#age> "26" .
<http://example.org#bob> <http://example.org#age> "33" .
To find out Alice's name and age, we could issue the following SPARQL query:
select ?n ?age {
<http://example.org#alice> <http://example.org#name> ?n ;
<http://example.org#age> ?age
}
In the following request, [QUERY] is the URL-encoded equivalent of this query:
GET /repositories/test?query=[QUERY] HTTP/1.1
Accept: application/json
HTTP/1.1 200 OK
Content-Type: application/json; charset=UTF-8
{"names":["n", "age"], "values":[["\"Alice\"", "\"26\""]]}
Or, asking for SPARQL XML results:
GET /repositories/test?query=[QUERY] HTTP/1.1
Accept: application/sparql-results+xml
HTTP/1.1 200 OK
Content-Type: application/sparql-results+xml; charset=UTF-8
<?xml version="1.0"?>
<sparql xmlns="http://www.w3.org/2005/sparql-results#">
<head><variable name="n"/><variable name="age"/></head>
<results>
<result>
<binding name="n"><literal>Alice</literal></binding>
<binding name="age"><literal>26</literal></binding>
</result>
</results>
</sparql>
To fetch all statements in a repository, issue a request like this:
GET /repositories/test/statements HTTP/1.1
Accept: text/plain
HTTP/1.1 200 OK
Content-Type: text/plain
<http://example.org#alice> <http://example.org#name> "Alice" .
<http://example.org#bob> <http://example.org#name> "Bob" .
<http://example.org#alice> <http://example.org#age> "26" .
<http://example.org#bob> <http://example.org#age> "33" .
And finally, if we submit a nonsense query, we get:
GET /repositories/test?query=hello&queryLn=english HTTP/1.1
Accept: */*
HTTP/1.1 400 Bad Request
Content-Type: text/plain; charset=UTF-8
UNSUPPORTED QUERY LANGUAGE: Unsupported query language: 'english'
URL summary
The following overview gives a summary of a subset of the URLs exposed by the server, and the methods allowed on them. Each method links to a description of the functionality exposed. See HTTP reference for a complete list of supported services.
Under a catalog prefix (/ for the root catalog, /catalogs/[name]/ for named catalogs), the following services are available:
GET /auditLog

Returns entries from the audit log which match the given parameters.
This service takes the following parameters:

startDate - A date and time in xsd:dateTime format, for example "2014-02-23T00:00:00Z". This can be used to select only events which occurred on or after this date and time. See the date-time section of Datatypes for information on the date-time format.

endDate - A date and time in xsd:dateTime format, for example "2014-02-23T00:00:00Z". This can be used to select only events which occurred on or before this date and time. See the date-time section of Datatypes for information on the date-time format.

users - A list of user names, or UNKNOWN. Only events triggered by the specified users are included in the results. UNKNOWN matches entries for which the acting user is not recorded, which includes some automatic system actions and failed logins.

events - The event classes to include in the results (see GET /auditLog/eventTypes for the available classes).

limit - The maximum number of results to return.

offset - Skip the first offset results in the result set.
If a parameter is not specified then no restriction is imposed on the corresponding component of the audit log entries.
The columns of results are:

eventName - The name of the event.

eventTime - The time at which the event occurred, for example "2014-02-23T00:00:01Z" (which equals February 23, 2014, one second after midnight, Greenwich Mean Time). See the date-time section of Datatypes for information on the date-time format.

user - The user who triggered the event.

remote - The remote IP address and port of the connection, for example 192.168.1.1:43420.

databaseName - The repository involved, in the form <catalog-name>/<repository-name>. For stores in the root catalog, the <catalog-name> is root.

target - The object of the event. For addIndex and dropIndex events, this is the name of the triple index. For addUser, changePassword, deleteUser and failedLogin events, this is the user id of the affected user.
The results are sorted by eventTime. The format of the results can be controlled with the Accept header.
Auditing is described in Auditing. The information above is repeated in that document.
GET /auditLog/eventTypes

Returns the event classes and their labels. Classes can be passed in the events parameter of auditLog to restrict the types of events in the results.
The format of the results can be controlled with the Accept header.
Auditing is described in Auditing. The information above is repeated in that document.
HTTP audit interface example

Here is an example where we use curl to get audit information.
curl -u test:xyzzy -X GET http://localhost:10035/auditLog
The information returned uses the application/sparql-results+xml format, which is the default for SPARQL results (audit requests are implemented internally as SPARQL queries). Note that some events have no user information, as the user is not known for them.
$ curl -u test:xyzzy -X GET http://localhost:10035/auditLog
<?xml version="1.0"?>
<!-- Generated by AllegroGraph 4.14 -->
<sparql xmlns="http://www.w3.org/2005/sparql-results#">
<head>
<variable name="eventName"/>
<variable name="eventTime"/>
<variable name="user"/>
<variable name="remote"/>
<variable name="databaseName"/>
<variable name="target"/>
</head>
<results>
<result>
<binding name="eventName">
<literal>add user</literal>
</binding>
<binding name="eventTime">
<literal datatype="http://www.w3.org/2001/XMLSchema#dateTime">2014-03-24T08:05:44Z</literal>
</binding>
<binding name="target">
<literal>test</literal>
</binding>
</result>
<result>
<binding name="eventName">
<literal>create triple-store</literal>
</binding>
<binding name="eventTime">
<literal datatype="http://www.w3.org/2001/XMLSchema#dateTime">2014-03-24T08:05:45Z</literal>
</binding>
<binding name="databaseName">
<literal>system/system</literal>
</binding>
</result>
<result>
<binding name="eventName">
<literal>create triple-store</literal>
</binding>
<binding name="eventTime">
<literal datatype="http://www.w3.org/2001/XMLSchema#dateTime">2014-03-24T08:05:52Z</literal>
</binding>
<binding name="user">
<literal>test</literal>
</binding>
<binding name="databaseName">
<literal>root/xxx</literal>
</binding>
</result>
<result>
<binding name="eventName">
<literal>add index</literal>
</binding>
<binding name="eventTime">
<literal datatype="http://www.w3.org/2001/XMLSchema#dateTime">2014-03-24T08:06:36Z</literal>
</binding>
<binding name="user">
<literal>test</literal>
</binding>
<binding name="databaseName">
<literal>root/xxx</literal>
</binding>
<binding name="target">
<uri>http://franz.com/allegrograph/4.11/audit-log#i</uri>
</binding>
</result>
<result>
<binding name="eventName">
<literal>create freetext index</literal>
</binding>
<binding name="eventTime">
<literal datatype="http://www.w3.org/2001/XMLSchema#dateTime">2014-03-24T08:06:36Z</literal>
</binding>
<binding name="user">
<literal>test</literal>
</binding>
<binding name="databaseName">
<literal>root/xxx</literal>
</binding>
<binding name="target">
<literal>fff</literal>
</binding>
</result>
</results>
</sparql>
GET /catalogs
Returns a set of catalogs that are available on this server. For each catalog, id and uri properties are returned, giving respectively the name of the catalog and the URL under which it is found. Properties named readable and writable indicate, for each catalog, whether the user has read or write access to it.
PUT /catalogs/[name]

If dynamic catalogs are enabled for the server (see the DynamicCatalogs directive), this can be used to create a new catalog. Takes an expectedStoreSize (integer) parameter, which sets the default expected size parameter for this catalog. Dynamic catalogs can also be created with the agtool catalogs tool.
DELETE /catalogs/[name]

Deletes a catalog. Only dynamic catalogs (those created through HTTP) can be deleted in this way. Dynamic catalogs can also be deleted with the agtool catalogs tool. Deleting a dynamic catalog also deletes all the repositories it contains.
GET /version

Returns the version of the AllegroGraph server, as a string. For example, 4.0.
GET /version/date

Returns the date on which the server was built.
GET /version/revision

Returns the git hash of the revision that the server was built from.
POST /reconfigure

Posting to this URL will cause the server to re-read its configuration file, and update itself to reflect the new configuration.
POST /reopenLog

Causes the server to re-open its log file. This is useful for log rotation.
GET /jobs

Returns lists of strings of the form ("uuid" "age" "description" [unused]), where "uuid" is the job UUID; "age" is the time since the job was created, in seconds; and "description" is the query string. Only query jobs are returned. The fourth element of the list will also be a string but is not currently used and so is not meaningful. Specifying a content-type of application/json will return the lists in JSON format.
Cancels the specified job. Requires a single parameter, the job UUID, which specifies the id of the job to cancel.
Scripting the server

AllegroGraph supports server-side scripting in both Common Lisp and JavaScript. Typical uses of such scripts include defining Prolog functors, creating custom services, and defining stored procedures.
Before scripts can be used by AllegroGraph, they must first be uploaded to the server. Scripts that are intended for site-wide use are referred to as Sitescripts. Scripts that are intended for use with a specific repository only are referred to as Reposcripts.
Scripts that have a '.js' extension will be interpreted as JavaScript source. Any other filename will be assumed to contain Common Lisp source, with the exception of files having a '.fasl' extension. Common Lisp source files will be compiled before being loaded and executed. Files with a .fasl extension are assumed to contain compiled Common Lisp code, and they will only be loaded. You are responsible for ensuring that fasl files are compatible with the server. (Note that fasl files from one version of Allegro CL cannot, in general, be loaded into a different version.) The default package for Common Lisp source is the db.agraph.user package. Once loaded, a script will not be reloaded unless the source file on the server has a modification date later than the time at which the last load of said script occurred.
Scripting is a powerful feature. As such, superuser permission is required in order to upload Sitescripts, while write access to a repository is necessary to upload Reposcripts.
Once scripts have been uploaded to the server, they may be loaded and used to interact with data stored in AllegroGraph. There are two ways to load scripts: specifying them in a 'script' query parameter when starting a dedicated session, or including an 'x-scripts' header along with your HTTP request (x-scripts headers will cause scripts to be loaded for any request that operates on a valid repository). The value of the header is a comma-separated list of script names. Both Sitescripts and Reposcripts can be specified. In the event that both a Sitescript and a Reposcript have the same name, in most cases, the Reposcript will be loaded. Federated triple-stores will only load Sitescripts, since they, by definition, operate on multiple repositories.
While scripts may be loaded into shared back-ends, it is highly recommended that scripts only be used with dedicated sessions. Please read the section on Sessions for further details.
Sitescript API

GET /scripts

Returns the list of Sitescripts currently on the server. When a user creates a session, they can choose to load one or more of these scripts into the session.
GET /scripts/[path]

Returns the contents of the Sitescript with the given name.
PUT /scripts/[path]

Add or replace the named Sitescript. The body of the request should contain the new script. Scripts that have a '.js' extension will be interpreted as JavaScript source. Scripts whose name ends in .fasl are assumed to be compiled Lisp code (you are responsible for ensuring that it is compatible with the server); anything else is assumed to be Common Lisp source, which the server will compile.
DELETE /scripts/[path]

Deletes the Sitescript with the given name.
Reposcript API

GET /repositories/[name]/scripts

Returns the list of Reposcripts currently on the server for the named repository. When a user creates a session, they can choose to load one or more of these scripts into the session.
GET /repositories/[name]/scripts/[path]

Returns the contents of the Reposcript with the given name.
PUT /repositories/[name]/scripts/[path]

Add or replace the named Reposcript. The body of the request should contain the new script. Scripts that have a '.js' extension will be interpreted as containing JavaScript source. Scripts whose name ends in .fasl are assumed to be compiled Lisp code (you are responsible for ensuring that it is compatible with the server); anything else is assumed to be Common Lisp source, which the server will compile.
DELETE /repositories/[name]/scripts/[path]

Deletes the Reposcript with the given name.
GET /initfile

An initialization file can be specified for a server, which is a collection of Common Lisp code that is executed in every shared back-end (see below) as it is created. This retrieves that file.
PUT /initfile

Replace the current initialization file with the body of the request. Takes one boolean parameter, restart, which defaults to true, and specifies whether any running shared back-ends should be shut down, so that subsequent requests will be handled by back-ends that include the new code.
DELETE /initfile

Removes the server's initialization file.
Defining Custom Services

To be able to provide an HTTP interface to scripted programs, AllegroGraph provides a special macro that allows one to easily define HTTP services. Services created this way will be available under the /custom/[name] suffix of a store. They will run with db.agraph:*db* bound to the store. For example:
(in-package #:db.agraph.user)
(custom-service
:get "r" "talk-about-me" :triple-cursor ()
(let ((me (upi !<http://example.com/me>)))
(db.agraph.cursor:make-transform-cursor (get-triples)
(lambda (triple) (setf (subject triple) me) triple))))
This, when put into the initfile, will cause /repositories/x/custom/talk-about-me to return all triples in that store, with their subjects replaced by <http://example.com/me>.
The macro arguments look like this:

(methods permissions name result-type arguments &body body)

methods - Should be one of :get, :post, :put, :delete, or a list of them. Indicates the HTTP methods on which this service is reachable.

permissions - Should be a string containing zero or more of the characters rwWes, indicating the permissions needed by a user to access this service. r stands for read, w for write, e for eval, and s for superuser. A capital W indicates that this service will mutate the store.

name - The name of the service. Can be any string.

result-type - The type of the value the service returns. This has to be known in order for the server to be able to do content-negotiation, and to be able to write the value out in all suitable formats. Allowed types are :string, :integer, :float, :boolean, :list (list of strings or triple-parts), :json (anything serializable as JSON, using ST-JSON), or :triple-cursor (an AllegroGraph-style cursor). You can specify :dynamic here if you want the server to look at the returned value and dynamically determine a suitable output format.

arguments - A list of argument specifications, each in the form (name type) for a required argument (for example (username :string)), or (name type :default DEFAULT-VALUE) for an optional argument (for example (size :integer :default 100)). Since all arguments must have a value when a request is processed, optional arguments must have a default value specified. The arguments will be extracted from the HTTP request, and bound to the given variable names. As type, one can specify :string, :integer, :float, :boolean (true or false), :list (for arguments that can be specified multiple times), :body (the request body), :method (the request method), or :content-type (the content-type specified for the request body). Arguments are checked, so requests that leave off arguments for which no default is specified, or pass something that can't be interpreted as the correct type, will return an error response before the service body is even run.

body - This is the code that gets run to produce the service's response value. It will run with the argument names bound to their values.
OWL 2 RL Materialization

PUT /repositories/[name]/materializeEntailed
Adds or replaces materialized triples in the store that are generated by entailment. This allows reasoning queries over the store without turning reasoning on at query time. By default the materializer only entails triples according to RDFS++ rules. Additional rules can be specified. Returns an integer count of the number of entailed triples added. See Materializer for more information.
with - This parameter can be specified multiple times to select additional rules for the materializer. See materialize-entailed-triples in Materializer for possible rulesets.
without - This parameter can be specified multiple times to deselect rules for the materializer. If only without is specified (no "with" parameter), then all rules except those specified by without will be used. Again see materialize-entailed-triples in Materializer.
useTypeSubproperty - A boolean which defaults to false. When true and possible, the materializer prefers using types which are rdfs:subPropertyOf rdf:type in entailed triples rather than using rdf:type directly.
commit - A positive integer. Will cause a commit to happen after every N added statements. Can be used to work around the fact that committing a huge amount of statements in a single transaction will require excessive amounts of memory.
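Since with and without may repeat, the query string is best built from a sequence of pairs; the ruleset names below are placeholders (see the Materializer document for the real ones):

```python
from urllib.parse import urlencode

# Repeated parameters are passed as (key, value) pairs, not a dict.
params = urlencode([
    ("with", "ruleset-a"),   # placeholder ruleset name
    ("with", "ruleset-b"),   # placeholder ruleset name
    ("commit", "10000"),     # commit after every 10000 added statements
])
url = "/repositories/test/materializeEntailed?" + params
```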
DELETE /repositories/[name]/materializeEntailed

Deletes any previously materialized triples. Returns an integer count of the number of materialized triples removed.
User management

The AllegroGraph server uses a simple access-control scheme. Requests made with a valid Authorization header get assigned to the authorized user. Each user has a set of permissions, which are used to determine whether the request can be made.
The user named anonymous plays a special role. When such a user exists, any request made without authentication information is assigned to that user. By default, no anonymous user is defined, which disallows anonymous access.
The following permission flags are defined:

super - Superuser rights. The super permission automatically grants all other permissions.

eval - Allows the user to evaluate arbitrary code on the server.

session - Allows the user to start dedicated sessions.
On top of that, read and write access can be specified per catalog and per store (as well as globally). read access allows one to query a repository; with write access one can also modify it. At the catalog level, write access permits the deleting and creating of repositories.
Each user can be assigned to a set of roles, each of which can also be granted permissions. The effective set of permissions that a user has is the union of their own permissions and those of their roles.
Most of the user-management services are only available to superusers. A normal user is allowed to inspect their own permissions and account status, manage their user data, and delete their own account.
GET /users

Returns a list of names of all the users that have been defined. For example:
GET /users HTTP/1.1
Accept: application/json
HTTP/1.1 200 OK
Content-Type: application/json; charset=UTF-8
["anonymous", "marijnh", "testuser"]
PUT /users/[name]

Create a new user. Expects a password parameter, which specifies the user's password (can be left off when creating the anonymous user).
DELETE /users/[name]

Delete a user.
POST /users/[name]/password

Change the password for the given user. The new password should be in the body of the request.
GET /users/[name]/password/expired

Return a boolean indicating whether the user's password is expired.
POST /users/[name]/password/expired

Expire the user's password. If the password is expired, then any attempt to log in will result in HTTP error 401 with the message "Password expired." However, an exception is made for changing one's own password: it can be done with an expired password. Changing the password cancels the expired status.
GET /users/[name]/roles

Retrieves a list of names, indicating the roles this user has.
PUT /users/[name]/roles/[role]

Add a role to a user.
DELETE /users/[name]/roles/[role]

Remove a role from a user. For example:
DELETE /users/anonymous/roles/project_a HTTP/1.1
GET /users/[name]/permissions
List the permission flags that have been assigned to a user (any of super, eval, session). This is what a request fetching permission flags as plain text looks like:
GET /users/marijnh/permissions
Accept: text/plain
HTTP/1.1 200 OK
Content-Type: text/plain; charset=UTF-8
eval
session
GET /users/[name]/effectivePermissions
Retrieve the permission flags assigned to the user, or any of its roles.
PUT /users/[name]/permissions/[type]

Assigns the given permission to this user. type should be super, eval, or session.
DELETE /users/[name]/permissions/[type]

Revokes the given permission for this user.
GET /users/[name]/access

Retrieve the read/write access for a user. This returns a result set, each element of which has a read, write, catalog, and repository component. The first two are booleans, the latter two strings. For permissions granted globally, catalog and repository will have a value of "*"; for those granted per-catalog, only repository will be "*". catalog normally contains the catalog name, but for the root catalog "/" is used.
For example, read access to all repositories in the public catalog is specified (in JSON format) by:

{read: true, write: false, catalog: "public", repository: "*"}
Whereas read/write access to repository scratch in the root catalog would be:

{read: true, write: true, catalog: "/", repository: "scratch"}
GET /users/[name]/effectiveAccess
As above, but also includes the access granted to roles that this user has.
PUT /users/[name]/access

This is used to grant read/write access to a user. It takes four parameters:

read - Whether to grant read access. A boolean, defaults to false.

write - Whether to grant write access. A boolean, defaults to false.

catalog - The catalog to grant access on. Use * to grant access on all catalogs. Again, use / for the root catalog.

repository - The repository to grant access on. *, or leaving the parameter off, means all repositories in the given catalog.
This request grants the user testuser read access to all repositories in the root catalog:
PUT /users/testuser/access?read=true&catalog=%2f HTTP/1.1
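A client might assemble such a request URL like this. This is a minimal Python sketch; the helper name and base URL are illustrative, and the actual request would then be issued with PUT by any HTTP client:

```python
from urllib.parse import urlencode

def access_url(base, user, read=False, write=False, catalog=None, repository=None):
    # Build the query URL for PUT /users/[name]/access. The parameter
    # names follow the service described above; urlencode percent-encodes
    # "/" (the root catalog) for us.
    params = {}
    if read:
        params["read"] = "true"
    if write:
        params["write"] = "true"
    if catalog is not None:
        params["catalog"] = catalog
    if repository is not None:
        params["repository"] = repository
    return "%s/users/%s/access?%s" % (base, user, urlencode(params))

# Reproduces the example request above (urlencode emits %2F rather than
# %2f; both decode to the same character on the server side):
url = access_url("http://localhost:10035", "testuser", read=True, catalog="/")
```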
DELETE /users/[name]/access
Takes the same parameters as PUT on this URL, but revokes the access instead of granting it.
Each user has a simple key-value store associated with it. This is mostly used by WebView to save some settings, but can be useful in other applications.
GET /users/[name]/data
Returns a result set containing id and uri fields, listing the keys stored for this user and the URL under which their data can be found (see below).
GET /users/[name]/data/[key]
Fetches the user-data under key. Returns a string.
PUT /users/[name]/data/[key]
Stores data in a user's key-value store. The request body should contain the data to store.
DELETE /users/[name]/data/[key]
Deletes data from a user's key-value store.
GET /users/[name]/security-filters/[type]
Get the list of security filters for a user. type is allow or disallow. Items returned have keys s, p, o, and g, per the parameters for POST security-filters.
POST /users/[name]/security-filters/[type]
Create a filter for the user. type is allow or disallow. The parameters s, p, o, and g (subject, predicate, object, graph) are all optional.
See also roles/security-filters and the Security Filters documentation.
DELETE /users/[name]/security-filters/[type]
Delete a filter for the user. The type and parameters (s, p, o, and g) are the same as for POST security-filters.
GET /users/[name]/suspended
Returns a boolean indicating whether the user's account is suspended. All accounts start as unsuspended. They can be suspended explicitly by a superuser, in which case the account remains suspended until explicitly unsuspended by a superuser. Accounts may also be suspended because of too many consecutive failed logins if the configuration option MaxFailedLogins is set appropriately. Accounts suspended for that reason can be unsuspended by a superuser, but may also be unsuspended automatically after a period of time if the configuration option AccountUnsuspendTimeout is set appropriately. Suspended users cannot log in.
POST /users/[name]/suspended
Suspends the user's account.
DELETE /users/[name]/suspended
Unsuspends the user's account.
GET /users/[name]/enabled
Returns a boolean indicating whether the user's account is enabled. All accounts start as enabled and stay enabled until disabled explicitly by a superuser.
POST /users/[name]/enabled
Enables the user's account.
DELETE /users/[name]/enabled
Disables the user's account.
GET /roles
Returns the names of all roles that have been defined.
PUT /roles/[role]
Creates a new role.
DELETE /roles/[role]
Deletes a role. Any users that have been assigned this role will lose it.
GET /roles/[role]/permissions
Lists the permission flags granted to a role.
PUT /roles/[role]/permissions/[type]
Grant a role a certain permission. type should be super, eval, or session.
DELETE /roles/[role]/permissions/[type]
Revoke a permission for a role.
GET /roles/[role]/access
Query the access granted to a role. Returns a result in the same format as the equivalent service for users.
PUT /roles/[role]/access
Grant read/write access to a role. See here for the parameters that are expected.
DELETE /roles/[role]/access
Revoke read/write access for a role. Accepts the same parameters as above.
GET /roles/[role]/security-filters/[type]
Get the list of security filters for a role. Same parameters as user/security-filters.
POST /roles/[role]/security-filters/[type]
Create a filter for the role. Same parameters as user/security-filters.
DELETE /roles/[role]/security-filters/[type]
Delete a filter for the role. Same parameters as user/security-filters and POST role/security-filters.
Catalog interface
GET /protocol
Returns the protocol version of the Sesame interface, as an integer. The protocol described in this document is version 4.
GET /repositories
Lists the repositories in this catalog. The result is a set of tuples containing id (the name of the repository) and uri (a link to the repository) fields. The fields readable and writable indicate whether your user has read and write access to the repository. Finally, there is the title field, which exists for Sesame compatibility and contains the same value as the id field.
GET /repositories HTTP/1.1
Accept: application/json
HTTP/1.1 200 OK
Content-Type: application/json; charset=UTF-8
[{"uri":"http://localhost:10035/repositories/store1",
"id":"store1",
"title":"store1",
"readable":true,
"writable":true},
... other stores ...]
Or, when fetching the list of repositories in a non-root catalog:
GET /catalogs/people/repositories HTTP/1.1
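Client-side, the JSON listing parses directly with a standard JSON library. A small sketch, using a body shaped like the example response above:

```python
import json

# A response body shaped like the example above (JSON true/false become
# Python True/False when parsed):
body = '''[{"uri": "http://localhost:10035/repositories/store1",
            "id": "store1", "title": "store1",
            "readable": true, "writable": true}]'''

repos = json.loads(body)
# Pick out the repositories this user may both read and write:
writable = [r["id"] for r in repos if r["readable"] and r["writable"]]
```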
PUT /repositories/[name]
Creates a new, empty repository. Supports several optional configuration arguments:
expectedSize
index
When a repository with the given name already exists, it is overwritten, unless a parameter override with value false is passed.
DELETE /repositories/[name]
Delete a repository. Might fail if someone else is accessing the repository.
Repository interface
GET/POST /repositories/[name]
(Note that if no query parameter is given, this will return the WebView HTML page instead of the service described here.)
This URL is used to run queries against a repository. It conforms to both the Sesame and SPARQL protocol interfaces — this is why some of the parameter names may look inconsistent.
SPARQL/Update queries are allowed, but only when the request is made using POST instead of GET (and the user has write access to the repository).
This service takes the following parameters:
query
The query to execute.
queryLn
The language of the query: sparql or prolog. Defaults to sparql.
infer
Defaults to false (no reasoning). Other options are rdfs++ (same as true), and restriction for hasValue as well as rdfs++ reasoning.
context
Restricts the query to one or more named graphs (as in FROM). When no context is given, all graphs are used. The string null can be used to refer to the default graph of the store.
namedContext
Like context, except that the named graphs retain their names (as in FROM NAMED).
default-graph-uri
Like context, except that plain-URI values should be given, without wrapping < and > characters.
named-graph-uri
As default-graph-uri, except that this specifies the named graphs to be used.
limit
Limits the number of results returned.
offset
Skips the first offset results in the result set.
$[varname]
Parameters starting with a $ character can be used to bind query variables to a fixed value (an N-Triples term) when executing a SPARQL query.
checkVariables
defaultGraphName
planner
save
timeout
The result formats supported depend on the query. Prolog queries return tabular data, as do SPARQL select queries. describe and construct queries return triples, and ask queries return a boolean value.
Prolog queries are allowed to return nested lists of results, in which case the result can only be returned as application/json, and the nested structure (both in the list of column names and in the results themselves) will be represented as nested JSON arrays.
To conserve resources, makes the database instance and its child processes exit if the instance is unused. Takes no arguments, returns nothing.
Normally unused database instances linger for InstanceTimeout seconds to speed up subsequent open operations.
GET /repositories/[name]/size
Returns the number of statements in the repository, as an integer. Takes a single optional argument, context, which can be used to restrict the count to a single named graph. Note that if the repository is a multi-master repository, the returned size may be inaccurate. See Triple count reports may be inaccurate in the Multi-master Replication document.
GET /repositories/[name]/statements
Retrieves statements (triples) by matching against their components. All parameters are optional — when none are given, every statement in the store is returned.
subj
subjEnd
Only meaningful when the subj parameter is given. Matches a range of subjects.
pred
As subj, to match a set of predicates.
predEnd
obj
objEnd
context
contextEnd
limit
offset
infer
One of false, rdfs++, and hasvalue. Default is false — no reasoning.
Exposes the exact same interface as GET /statements. Useful when you need to make a POST request because of URL-length limitations.
DELETE /repositories/[name]/statements
Deletes statements matching the given parameters. When no parameters are given, every triple in the store is deleted. Returns the number of triples deleted. The subj, pred, obj, and context parameters are as for GET /statements.
This deletes all statements in the graph named "A" (of which there are 25):
DELETE /repositories/repo1/statements?context=%22A%22 HTTP/1.1
Accept: application/json
HTTP/1.1 200 OK
Content-Type: application/json; charset=UTF-8
25
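The context parameter is an N-Triples term, so the quotes around the graph name travel in the URL. A quick sketch of the encoding that produces the %22A%22 seen above:

```python
from urllib.parse import quote

# The graph named by the literal "A" is written with its quotes,
# then percent-encoded:
context = quote('"A"', safe="")
path = "/repositories/repo1/statements?context=" + context
```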
PUT/POST /repositories/[name]/statements
Add statements to a store. When the PUT method is used, the store is emptied first, whereas the POST method just adds triples. The value returned is an integer indicating the number of triples loaded.
The Content-Type header determines the way the given data is interpreted. See the HTTP reference for details on the supported formats.
Normally, the body of the request is used as input data. Alternatively, one can pass a file parameter to indicate a (server-side) file name to be loaded, or a url parameter to load a file directly off the web. Other supported parameters include:
baseURI
context
commit
An integer N; commit after every N added statements. Can be used to work around the fact that importing a huge number of statements in a single transaction will require excessive amounts of memory.
continueOnError
externalReferences
relaxSyntax
See the HTTP reference for a complete list of query parameters.
This request, where [URL] is the encoded form of some URL that contains an N-Triples file, loads the triples from that URL into the scratch store under context <http://example.org#test>:
POST /repositories/scratch/statements?url=[URL]&context=%3Chttp%3A%2F%2Fexample.org%23test%3E HTTP/1.1
Accept: application/json
HTTP/1.1 200 OK
Content-Type: application/json; charset=UTF-8
2530
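The url and context parameters in that request are plain percent-encodings. A sketch (the data URL stands in for the [URL] placeholder above):

```python
from urllib.parse import urlencode

data_url = "http://example.com/data.nt"  # illustrative stand-in for [URL]
qs = urlencode({"url": data_url,
                "context": "<http://example.org#test>"})
path = "/repositories/scratch/statements?" + qs
```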
Preparing Queries
It is possible to cache the parsing and parameterization of SPARQL queries. This is currently only recommended for very big queries that get repeated a lot, since the savings are not very large.
Preparing queries is only supported in a dedicated session, and the prepared queries will only be available in that session.
GET/POST /repositories/[name]/queries/[id]
Executes a prepared query. Supports the limit, offset, and bound-variable ($[varname]) parameters in the same way as the regular query interface, and takes all the other parameters from the prepared query stored under the name id (either through the save argument to a regular query, or a PUT request to this URL).
PUT /repositories/[name]/queries/[id]
Prepares a query. Accepts the query, infer, context, namedContext, default-graph-uri, named-graph-uri, checkVariables, defaultGraphName, and planner arguments in the same way that the regular query interface accepts them, but instead of executing the query, it prepares it and saves it under id.
DELETE /repositories/[name]/queries/[id]
Deletes the prepared query stored under id.
The application/x-rdftransaction MIME type, as accepted by the /statements service, is not standardized or even widely documented, so we briefly describe it here. The format originates from the Sesame HTTP protocol.
Documents of this type are XML documents containing a number of triple additions and removals. They are executed as a transaction: either completely, or not at all. (Though note that this does not mean an implicit commit is executed on a dedicated session.)
An RDF transaction document looks roughly like this:
<transaction>
<add>
<bnode>person4</bnode>
<uri>http://www.w3.org/1999/02/22-rdf-syntax-ns#type</uri>
<uri>http://franz.com/simple#person</uri>
</add>
<add>
<bnode>person4</bnode>
<uri>http://franz.com/simple#birth</uri>
<literal datatype="http://www.w3.org/2001/XMLSchema#date">1917-05-29</literal>
</add>
<remove>
<null/>
<uri>http://franz.com/simple#first-name</uri>
<null/>
</remove>
<clear>
<uri>http://franz.com/simple#context1</uri>
</clear>
</transaction>
A transaction's root tag is always transaction. Inside of this, any number of actions can be specified, which are executed in the order in which they are given.
The add action adds a triple. It should contain at least three nodes, which specify the subject, predicate, and object of the triple. After that, any number of nodes may follow, which specify contexts that the triple should be added to. A null tag can be used to specify the default context. If no contexts are given, the triple is inserted into only the default context.
These child nodes can be either a uri tag containing a resource's URI, a bnode tag containing an ID that is used to be able to refer to the same blank node multiple times in the document, or a literal tag, which contains a string. literal tags may have datatype or xml:lang attributes to assign them a type or a language.
In a remove tag, the first three child nodes optionally specify the subject, predicate, and object of the triples to remove. Any of these can be given as a null tag (or left out altogether) to count as a wildcard. After these, any number of nodes can follow, which specify the contexts to remove nodes from. If none are given, nodes are removed from all contexts. A null tag is used to specify the default context here. removeFromNamedContext works the same, but requires a single context to be specified.
clear also removes triples, but without the option to specify subject, predicate, or object. All child nodes are interpreted as contexts. Again, not specifying any contexts causes triples to be removed from all contexts.
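Such a document can be generated with any XML library. A minimal sketch using Python's standard library, covering only add and remove (None stands in for a <null/> wildcard; the helper name is illustrative):

```python
import xml.etree.ElementTree as ET

def rdf_transaction(adds=(), removes=()):
    # Each triple is a sequence of (tag, text) pairs, e.g. ("uri", ...),
    # ("bnode", ...), or ("literal", ...); None in a remove triple
    # becomes a <null/> wildcard element.
    root = ET.Element("transaction")
    for action, triples in (("add", adds), ("remove", removes)):
        for triple in triples:
            node = ET.SubElement(root, action)
            for part in triple:
                if part is None:
                    ET.SubElement(node, "null")
                else:
                    tag, text = part
                    ET.SubElement(node, tag).text = text
    return ET.tostring(root, encoding="unicode")

# Rebuilds part of the example document above:
doc = rdf_transaction(
    adds=[[("bnode", "person4"),
           ("uri", "http://www.w3.org/1999/02/22-rdf-syntax-ns#type"),
           ("uri", "http://franz.com/simple#person")]],
    removes=[[None, ("uri", "http://franz.com/simple#first-name"), None]])
```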
POST /repositories/[name]/statements/delete
Used to delete a set of statements. Expects a JSON-encoded array of triples as the posted data, and deletes all statements listed there. Content-Type should be application/json.
When an ids parameter with the value true is passed, the request body should contain a JSON-encoded list of triple IDs, instead of actual triples.
Fetches a set of statements by ID. Takes any number of id parameters, and returns a set of triples.
GET /repositories/[name]/unique/[column]
Find the set of unique terms in a column. column can be one of obj, pred, subj, or context. Without arguments, it simply finds the set of all terms that occur in that column in the database. The query can be further refined by passing optional obj, pred, subj, or context parameters, which, when given a term, restrict the result to only triples that contain that term in that position. For example, to get all outgoing predicates from term <http://example.com>, query /unique/pred with the parameter subj set to <http://example.com>.
Returns a set of terms.
GET /repositories/[name]/statements/duplicates
Gets all duplicate statements that are currently present in the store. The mode parameter can be either spog (the default) or spo to indicate which components of each triple must be equivalent to count as duplicates of each other. See Deleting Duplicate Triples.
DELETE /repositories/[name]/statements/duplicates
Deletes all duplicate statements that are currently present in the store. The mode parameter can be either spog (the default) or spo to indicate which components of each triple must be equivalent to count as duplicates of each other. See Deleting Duplicate Triples.
GET /repositories/[name]/suppressDuplicates
Returns the duplicate suppression strategy currently active in the store. This returns either false if no duplicate suppression is active, or spog if it is active. See Deleting Duplicate Triples.
PUT /repositories/[name]/suppressDuplicates
Sets the duplicate suppression strategy for the store. The type argument can be either false (disable duplicate suppression), spo (enable it, eliminate all spo duplicates on commit) or spog (enable it, eliminate all spog duplicates on commit). See Deleting Duplicate Triples.
DELETE /repositories/[name]/suppressDuplicates
Disable duplicate suppression for the store. This is the equivalent of using PUT /repositories/[name]/suppressDuplicates with false as the type argument. See Deleting Duplicate Triples.
GET /repositories/[name]/contexts
Fetches a list of named graphs in the store. Returns a set of tuples, each of which only has a contextID field, which is an N-Triples string that names the graph.
GET /catalogs/people/repositories/repo1/contexts HTTP/1.1
Accept: application/sparql-results+xml
HTTP/1.1 200 OK
Content-Type: application/sparql-results+xml; charset=UTF-8
<?xml version="1.0"?>
<sparql xmlns="http://www.w3.org/2005/sparql-results#">
<head><variable name="contextID"/></head>
<results>
<result>
<binding name="contextID">
<literal>A</literal>
</binding>
</result>
</results>
</sparql>
POST /repositories/[name]/functor
Define Prolog functors, which can be used in Prolog queries. This is only allowed when accessing a dedicated session.
The body of the request should hold the definition for one or more Prolog functors, in Lisp syntax, using the <-- or <- operators.
POST /repositories/[name]/begin
Begin a new transaction. It is an error if there is already an active transaction (400 Bad Request is returned with message NESTED TRANSACTION Cannot begin a new transaction while there is one already active). This request is only meaningful in dedicated sessions and with Sesame 2.7 transaction handling semantics. In shared back-ends or with Sesame 2.6 semantics, it does nothing.
POST /repositories/[name]/commit
Commit the current transaction. Only meaningful in dedicated sessions.
POST /repositories/[name]/rollback
Roll back the current transaction (discard all changes made since the beginning of the transaction). Only meaningful in dedicated sessions.
POST /repositories/[name]/eval
Evaluates the request body in the server. By default, it is evaluated as Common Lisp code, with *package* set to the db.agraph.user package. If you specify a Content-Type of text/javascript, however, the code will be interpreted as JavaScript.
Makes an attempt to return the result in a sensible format, falling back to printing it (as per prin1) and returning it as a string.
GET/POST /repositories/[name]/freetext
Perform a query on the free-text indices of the store, if any. A list of matching triples is returned. Either pattern or expression should be passed:
pattern
Putting multiple words in this argument means 'match only triples with all these words'. Double-quoting a part of the string means 'only triples where this exact string occurs'. Non-quoted words may contain wildcards: * (matches any string) and ? (matches any single character). Or they can end in ~ to do a fuzzy search, optionally followed by a decimal number indicating the maximum Levenshtein distance to match. A vertical bar (|) can be used between patterns to mean 'documents matching one of these patterns', and parentheses can be used to group sub-patterns. For example: "common lisp" (programming | develop*).
expression
As an alternative to pattern, a query can be given in Lisp syntax, using the operators and, or, phrase, match, and fuzzy. For example (and (phrase "common lisp") (or "programming" (match "develop*"))).
index
sorted
limit
offset
GET /repositories/repo1/freetext?pattern=RDF HTTP/1.1
Accept: text/plain
HTTP/1.1 200 OK
Content-Type: text/plain; charset=UTF-8
http://example.com/node1 http://example.com/name "AGraph RDF store" .
.... others ....
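Building such a request in a client is again just encoding. A sketch using the pattern from the example in the text above:

```python
from urllib.parse import urlencode

# The example pattern from above: the exact phrase "common lisp" plus
# either "programming" or a word starting with "develop".
pattern = '"common lisp" (programming | develop*)'
qs = urlencode({"pattern": pattern, "limit": 20})
path = "/repositories/repo1/freetext?" + qs
```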
GET /repositories/[name]/freetext/indices
Returns a list of names of free-text indices defined in this repository.
GET /repositories/[name]/freetext/indices/[index]
Only returns application/json responses. Returns the configuration parameters of the named free-text index. This will be an object with the following fields:
predicates
indexLiterals
true (index all literals), false (no literals), or an array of literal types to index.
indexResources
true (index resources fully), false (don't index resources), or the string "short" to index only the part after the last # or / in the resource.
indexFields
A subset of "subject", "predicate", "object", and "graph". This indicates which fields of a triple are indexed.
minimumWordSize
stopWords
wordFilters
innerChars
borderChars
tokenizer
The tokenizer used (default or japanese).
GET /repositories/[name]/freetext/indices/[index]/[param]
If [param] is one of the slot values mentioned above, the corresponding configuration parameter of the index is returned.
PUT /repositories/[name]/freetext/indices/[index]
Create a new free-text index. Takes the following parameters:
predicate
indexLiterals
indexLiteralType
When indexLiterals is true, this parameter can be given any number of times to restrict the types of literals that are indexed. When not given, all literals (also untyped ones) are indexed.
indexResources
true, false, or short. Default is false. short means to index only the part of the resource after the last # or / character.
indexField
One of subject, object, predicate, or graph. Determines which fields of a triple to index. Defaults to just object.
minimumWordSize
stopWord
wordFilter
Supported filters are stem.english (a simple English-language stemmer), drop-accents (will turn 'é' into 'e', etc.), and soundex, for the Soundex algorithm.
innerChars
The characters allowed inside a word. Accepted values are alpha (all Unicode alphabetic characters), digit (all base-10 digits), alphanumeric (all digits and alphabetic characters), a single character, or a dash (-) character followed by another single character.
borderChars
The characters allowed at the start and end of a word; takes the same values as innerChars.
tokenizer
Can be either default or japanese. When japanese is passed, the tokenizer is based on morphological analysis, and the innerChars and borderChars parameters are ignored. For japanese, it is also recommended to set minimumWordSize to either 1 or 2.
POST /repositories/[name]/freetext/indices/[index]
This can be used to reconfigure a free-text index. It takes all the parameters that the PUT service takes. Parameters not specified are left at their old values. To indicate that the predicate, indexLiteralType, indexField, stopWord, or wordFilter parameters should be set to the empty set instead of left at their default, pass these parameters once, with the empty string as value.
With the parameter reIndex, a boolean which defaults to true, the client can control whether a full re-indexing of the modified index should take place, or whether the new settings should only be used when indexing triples added after the redefinition.
DELETE /repositories/[name]/freetext/indices/[index]
Delete the named index from the repository.
POST /repositories/[name]/blankNodes
Ask the server to allocate and return a set of blank nodes. Takes one argument, amount, which should be an integer.
These nodes can, in principle, be used to refer to nodes when using other services. Note, however, that a lot of the standards related to RDF give blank nodes a document-wide scope, which means that referring to blank nodes by name from, for example, a SPARQL query or N-Triples document is not possible. Such names are interpreted as local to the document, and are assigned new blank nodes.
POST /repositories/repo1/blankNodes?amount=2 HTTP/1.1
Accept: text/plain
HTTP/1.1 200 OK
Content-Type: text/plain; charset=UTF-8
_:s1x49c10bbc
_:s2x49c10bbc
GET /repositories/[name]/tripleCache
Find out whether the 'SPOGI cache' is enabled, and what size it has. Returns an integer: 0 when the cache is disabled, the size of the cache otherwise.
PUT /repositories/[name]/tripleCache
Enable the spogi cache in this repository. Takes an optional size argument to set the size of the cache.
DELETE /repositories/[name]/tripleCache
Disable the spogi cache for this repository.
GET /repositories/[name]/noCommit
Returns a boolean that tells you whether this repository is currently in no-commit mode. When this mode is active, all commits from any clients will return an error, effectively preventing writes to the store. This is mostly useful for warm standby clients, but can also be used to enforce read-only stores in other situations.
PUT/DELETE /repositories/[name]/noCommit
Turns no-commit mode on (PUT) or off (DELETE).
GET /repositories/[name]/bulkMode
Returns a boolean indicating whether bulk-load mode is enabled for the repository.
PUT/DELETE /repositories/[name]/bulkMode
Turn bulk-load mode on (PUT) or off (DELETE).
Namespaces
In order to make queries shorter and more readable, a user can define namespaces, which will be used for queries issued by this user.
GET /repositories/[name]/namespaces
List the currently active namespaces for your user, as tuples with prefix and namespace (the URI) fields. For example:
GET /catalogs/scratch/repositories/repo2/namespaces HTTP/1.1
Accept: application/json
HTTP/1.1 200 OK
Content-Type: application/json; charset=UTF-8
[{"prefix":"rdf","namespace":"http://www.w3.org/1999/02/22-rdf-syntax-ns#"},
{"prefix":"owl","namespace":"http://www.w3.org/2002/07/owl#"},
... etc ...]
DELETE /repositories/[name]/namespaces
Deletes all namespaces in this repository for the current user. If a reset argument of true is passed, the user's namespaces are reset to the default set of namespaces.
GET /repositories/[name]/namespaces/[prefix]
Returns the namespace URI defined for the given prefix.
PUT/POST /repositories/[name]/namespaces/[prefix]
Create a new namespace. The body of the request should contain the URI for the namespace, as plain text, as in:
POST /catalogs/scratch/repositories/repo2/namespaces/ex HTTP/1.1
Content-Type: text/plain; charset=UTF-8
http://www.example.com/
DELETE /repositories/[name]/namespaces/[prefix]
Delete a namespace.
Type mappings
AllegroGraph supports storing some types of literals in encoded, binary form. This typically makes them smaller (less disk usage), and makes it possible to run range queries against them. For more details, see Datatypes and the Lisp reference for Data-type and Predicate Mapping.
Specifying which literals should be encoded can be done in two ways. There is 'datatype mapping', where a literal type is marked and all literals of that type are encoded, and there is 'predicate mapping', where the objects of statements with a given predicate are encoded.
When specifying a mapping, one has to choose an encoding to apply. To identify these encodings, we use XSD datatypes:
<http://www.w3.org/2001/XMLSchema#byte>
<http://www.w3.org/2001/XMLSchema#short>
<http://www.w3.org/2001/XMLSchema#int>
<http://www.w3.org/2001/XMLSchema#long>
<http://www.w3.org/2001/XMLSchema#unsignedByte>
<http://www.w3.org/2001/XMLSchema#unsignedShort>
<http://www.w3.org/2001/XMLSchema#unsignedInt>
<http://www.w3.org/2001/XMLSchema#unsignedLong>
<http://www.w3.org/2001/XMLSchema#float>
<http://www.w3.org/2001/XMLSchema#double>
<http://www.w3.org/2001/XMLSchema#time>
<http://www.w3.org/2001/XMLSchema#date>
<http://www.w3.org/2001/XMLSchema#dateTime>
Geospatial types (including cartesian, spherical and n-Dimensional) can be defined in this way. In each case, the datatype must be specified using its URL. E.g., the 10x10 cartesian mapping with resolution 1 would use the URL:
<http://franz.com/ns/allegrograph/3.0/geospatial/cartesian/0.0/10.0/0.0/10.0/1.0>
Typed literals of these types will be encoded by default. For other types, you have to specify your mapping before you import your data to have the encoding take place.
GET /repositories/[name]/mapping
Fetches a result set of currently specified mappings. Each row has kind (datatype or predicate), part (the resource associated with the mapping), and encoding fields.
DELETE /repositories/[name]/mapping
Clear all non-automatic type mappings for this repository.
DELETE /repositories/[name]/mapping/all
Clear all type mappings for this repository, including the automatic ones.
GET /repositories/[name]/mapping/type
Yields a list of literal types for which datatype mappings have been defined in this store.
GET /repositories/test/mapping/type HTTP/1.1
HTTP/1.1 200 OK
Content-Type: text/plain; charset=UTF-8
<http://www.example.com/myInteger>
<http://www.w3.org/2001/XMLSchema#unsignedShort>
<http://www.w3.org/2001/XMLSchema#dateTime>
... etc ...
PUT/POST /repositories/[name]/mapping/type
Takes two arguments, type (the RDF literal type) and encoding, and defines a datatype mapping from the first to the second. For example, if [TYPE] is the URL-encoded form of <http://www.example.com/myInteger>, and [ENC] of <http://www.w3.org/2001/XMLSchema#int>, this request will cause myInteger literals to be encoded as integers:
PUT /repositories/test/mapping/type?type=[TYPE]&encoding=[ENC] HTTP/1.1
DELETE /repositories/[name]/mapping/type
Deletes a datatype mapping. Takes one parameter, type, which should be an RDF resource.
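The [TYPE] and [ENC] placeholders in the example above are ordinary percent-encodings of the two resources. A sketch of producing them:

```python
from urllib.parse import urlencode

# Encode the literal type and target encoding as angle-bracketed
# resources, as in the example request above:
qs = urlencode({
    "type": "<http://www.example.com/myInteger>",
    "encoding": "<http://www.w3.org/2001/XMLSchema#int>",
})
path = "/repositories/test/mapping/type?" + qs
```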
GET /repositories/[name]/mapping/predicate
Yields a list of predicates for which predicate mappings have been defined in this store.
PUT/POST /repositories/[name]/mapping/predicate
Takes two arguments, predicate and encoding, and defines a predicate mapping on them.
DELETE /repositories/[name]/mapping/predicate
Deletes a predicate mapping. Takes one parameter, predicate.
Triple stores can be equipped with a variety of indices, which will affect query performance. By default, a store gets a sensible set of indices, but it is possible to tweak this set.
Indices are identified by cryptic IDs, such as spogi, which stands for "subject, predicate, object, graph, id": the order in which the fields of triples are used when sorting for that index.
GET /repositories/[name]/indices
Returns a list of index IDs that are enabled for this store. When a listValid=true parameter is passed, a list of all supported index types is returned.
PUT /repositories/[name]/indices/[type]
Ensures that the index indicated by type is present in this store. Takes effect at commit time (which is, of course, immediately when using a shared back-end or an auto-commit session).
DELETE /repositories/[name]/indices/[type]
Removes the index indicated by type from the store. Also takes effect at commit time.
POST /repositories/[name]/indices/optimize
Tells the server to try to optimize the indices for this store. The arguments are wait, level, and index. All are optional. Here is a sample call:
POST /repositories/myrepo/indices/optimize?wait=false&level=1&index=spogi&index=gposi
POST /repositories/[name]/indices/optimize?wait=[true or false]
wait defaults to false. The value true indicates the HTTP request should wait for the operation to complete rather than returning right away, which is what happens when wait is false.
POST /repositories/[name]/indices/optimize?level=[0 or 1 or 2]
level specifies how much optimization work should be done, with 0 being the least and 2 (which is the default) being the most (see Triple Indices).
POST /repositories/[name]/indices/optimize?index=[index1 name]&index=[index2 name]...
index specifies an index to be optimized. Index names are combinations of s, p, o, g, and i (see Triple Indices). This command will not create new indices, so only existing indices should be specified. Specifying no index means optimize all indices. index may appear as many times as desired.
Geo-spatial queries
When literals are encoded as geo-spatial values, it is possible to efficiently perform geometric queries on them.
In order to do this, one defines a geo-spatial datatype, and then adds literals of that type to the store. AllegroGraph supports two kinds of geo-spatial datatypes, cartesian and spherical. A cartesian literal looks like this:
"+10.0-17.5"^^<[cartesian type]>
where the numbers are the X and Y coordinates of the point. A spherical literal uses ISO 6709 notation, for example:
"+37.73+122.22"^^<[spherical type]>
GET /repositories/[name]/geo/types
Retrieve a list of geospatial types defined in the store.
POST /repositories/[name]/geo/types/cartesian
Define a new Cartesian geospatial type. Returns the type resource, which can be used as the type argument in the services below. Takes the following parameters:
stripWidth
xmin, xmax, ymin, ymax
POST /repositories/[name]/geo/types/spherical
Add a spherical geospatial type. Returns the type resource. Takes the following parameters:
stripWidth
unit
Can be degree, radian, km, or mile. Determines the unit in which the stripWidth argument is given. Defaults to degree.
latmin, longmin, latmax, longmax
For example, this defines a type with a granularity of 2 degrees:
POST /repositories/repo1/geo/types/spherical?stripWidth=2 HTTP/1.1
Accept: text/plain
HTTP/1.1 200 OK
Content-Type: text/plain; charset=UTF-8
<http://franz.com/ns/allegrograph/3.0/geospatial/spherical/degrees/
-180.0/180.0/-90.0/90.0/2.0>
(The newline in the URI wouldn't be there in the actual response.)
GET /repositories/[name]/geo/box
Fetch all triples with a given predicate whose object is a geospatial value inside the given box. Takes the following parameters:
type
predicate
xmin, ymin, xmax, ymax
limit
offset
infer
Defaults to false (no reasoning); other accepted values are rdfs++ and hasvalue.
useContext
Defaults to false. When true, the context (graph) field of triples is used as the geospatial value to match, rather than the object.
Retrieve triples within a circle. The type, predicate, infer, limit, offset, and useContext arguments work as with /geo/box. Takes x, y, and radius arguments, all floating-point numbers, to specify the circle.
Retrieve triples whose object lies within a circle in a spherical system. Takes type, predicate, infer, limit, offset, and useContext arguments like /geo/box, lat and long arguments to specify the centre of the circle, and a radius argument. The unit used for radius defaults to kilometre, but can be set to mile by passing a unit parameter with the value mile.
Create a polygon in the store. Takes a parameter resource, which sets the name under which the polygon should be stored, and three or more point arguments, which must be geospatial literals that represent the points of the polygon.
Retrieve triples whose object lies inside a polygon. type, predicate, infer, limit, offset, and useContext work as with /geo/box. The polygon parameter must hold the name of a polygon created with the above service.
Attributes are name/value pairs that can be associated with triples. See Triple Attributes for more information on attributes. The HTTP interface can be used to define attributes. An attribute must be defined before triples can be added with that attribute, and an attribute can be associated with a triple only when it is added.
A static filter can be used to control access to triples. Static filters can require that a user have specific attributes before that user can see a triple. This access control is one of the important use cases of specifying triple attributes.
The HTTP interface to attributes
GET /repositories/[name]/attributes/definitions
See GET catalogs / [CATNAME] / repositories / [REPONAME] / attributes / definitions in HTTP-Reference.
POST /repositories/[name]/attributes/definitions
See POST catalogs / [CATNAME] / repositories / [REPONAME] / attributes / definitions in HTTP-Reference.
DELETE /repositories/[name]/attributes/definitions
See DELETE catalogs / [CATNAME] / repositories / [REPONAME] / attributes / definitions in HTTP-Reference.
GET /repositories/[name]/attributes/staticFilter
See GET catalogs / [CATNAME] / repositories / [REPONAME] / attributes / staticFilter in HTTP-Reference.
POST /repositories/[name]/attributes/staticFilter
See POST catalogs / [CATNAME] / repositories / [REPONAME] / attributes / staticFilter in HTTP-Reference.
DELETE /repositories/[name]/attributes/staticFilter
See DELETE catalogs / [CATNAME] / repositories / [REPONAME] / attributes / staticFilter in HTTP-Reference.
GET /repositories/[name]/metadata
Returns the metadata for a repository. Metadata comprises attribute and static filter definitions.
POST /repositories/[name]/metadata
Merges the given metadata into the current metadata. Changing an attribute definition is not permitted. A commit must be done to make these changes persistent. The metadata argument must be a string holding the metadata value; this value should have come from a previous GET of the metadata. Metadata comprises attribute and static filter definitions.
Some queries and operations are best expressed using graph-traversing generators. The following services make it possible to define such generators.
PUT /repositories/[name]/snaGenerators/[generator]
Creates a new generator under the given name. Accepts the following parameters:
- objectOf - one or more predicates; follow edges from subject to object
- subjectOf - as objectOf, but follow edges from object to subject
- undirected - as objectOf, but follow edges in both directions
- query - a query in which ?node can be used to refer to the 'start' node, and whose results will be used as 'resulting' nodes. User namespaces may be used in this query.
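As a sketch (the repository URL, generator name, and predicate are hypothetical), creating a generator amounts to a PUT with the edge-following parameters encoded in the query string:

```python
from urllib.parse import urlencode, quote

# Hypothetical generator: follow a "knows" predicate in both directions.
repo = "http://localhost:10035/repositories/people"
name = "friends"  # generator name (example)
params = urlencode({"undirected": "<http://example.org/knows>"})
url = f"{repo}/snaGenerators/{quote(name)}?{params}"
# Issuing an authenticated PUT on this URL would create the generator.
print(url)
```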
PUT /repositories/[name]/neighborMatrices/[matrix]
Create a neighbor-matrix, which is a pre-computed generator. Takes group, generator, and depth parameters; depth defaults to 1.
Sessions
Normal requests will be handled by one of a set of shared back-end processes (see Server Configuration and Control). Because these back-ends are shared between all incoming requests, there are certain limitations on the way they can be used: a shared back-end does not support transactions spanning multiple requests, and does not allow one to define new Prolog functors. Further, though it is possible to load scripts into shared back-ends, it is highly recommended that you not do so. Function definitions, global variables, and custom services, once loaded, remain active in a shared back-end until that back-end is killed. JavaScript scripts are sandboxed for the most part and only leak custom-service definitions into the back-end. Unless you are extremely careful about what scripts are available to be loaded, it is possible that code you expect to run as part of a server-side script has been overwritten by other requests, and unexpected errors may result. A session, on the other hand, can only be accessed by the user who requested it, so the scripts loaded can be carefully managed.
When transactions, functors, or other persistent state is needed, it is necessary to create a session. This spawns a process that you will have exclusive access to. Sessions are effectively single threaded: only one request at a time will be processed.
Requests to a session URL are handled inside a transaction, unless the autoCommit parameter is given when starting the session. That means modifications to the store are not visible in other sessions or in shared back-ends until a commit request is made. A rollback request can be used to discard any changes made since the beginning of the current transaction.
When making extra requests to commit and rollback is too expensive, one can use the x-commit and x-rollback HTTP headers (with any value) to have the rollback or commit command piggyback on another request. x-rollback will cause the store to be rolled back before evaluating the request; x-commit will cause a commit after evaluating the request.
Sessions time out after a certain period of idleness (can be set on creation), so an application that depends on a session being kept alive should periodically ping its session.
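A keep-alive can be as simple as a periodic GET on the session's ping service. The sketch below only constructs the request, since the session URL is hypothetical; a real client would send it with urllib.request.urlopen and reschedule itself, e.g. with threading.Timer:

```python
import urllib.request

# Hypothetical session URL as returned by the session-creation service.
session_url = ("http://localhost:55555/sessions/"
               "7e8df8cd-26b8-26e4-4e83-0015588336ea")

def make_ping(url: str) -> urllib.request.Request:
    # Build (but do not send) the keep-alive request. A real client would
    # urlopen() this periodically, well inside the session's idle timeout.
    return urllib.request.Request(url + "/session/ping", method="GET")

req = make_ping(session_url)
print(req.full_url)
```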
Sessions belonging to superusers allow their owners to use the x-masquerade-as-user header in HTTP requests to activate another user's security filters. See user and role security-filters and the Security filters documentation.
A session can execute all kinds of requests (even those that mutate the store), with or without a transaction.
When within a transaction, database modifications are not visible in other sessions or in shared back-ends until a commit request is made. A rollback request can be used to discard any changes made since the transaction started.
When there is no active transaction, each individual request is executed as if in a separate transaction.
It's generally true that having an active transaction is equivalent to auto-commit mode being off, but the details of how the two modes are entered and exited differ depending on whether the server is configured to operate under Sesame 2.6 or Sesame 2.7 semantics.
With Sesame 2.6 transaction handling, if the active transaction is rolled back or committed, a new one is started immediately. Auto-commit mode is only ever changed by explicit request.
With Sesame 2.7 transaction handling, transactions must be started explicitly with begin; commit and rollback do not start a new transaction. Instead they turn auto-commit mode on. Changing the auto-commit flag is deprecated in favor of begin and commit. However, to maintain some backwards compatibility, turning auto-commit off starts a new transaction, and turning it on commits the transaction.
Extra HTTP headers
When making explicit requests to rollback, begin, or commit is too expensive, one can use the x-rollback, x-begin, and x-commit HTTP headers (with any value) to have the rollback, begin, or commit command piggyback on another request. A request may have any combination of these extra headers. The execution semantics are as follows: before the request is performed, if x-rollback is present, the store is rolled back. Next, if x-begin is present, a new transaction is started. Then the actual request is performed, and the store is committed if x-commit is supplied.
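For example (sketch only: the session URL and triple data are hypothetical), a single POST of statements can be wrapped in its own transaction by attaching x-begin and x-commit:

```python
import urllib.request

# Hypothetical session URL; the extra headers piggyback transaction
# control on an ordinary request: begin before it, commit after it.
session_url = ("http://localhost:55555/sessions/"
               "7e8df8cd-26b8-26e4-4e83-0015588336ea")
data = b'<http://example.org#s> <http://example.org#p> "o" .'
req = urllib.request.Request(
    session_url + "/statements",
    data=data,
    method="POST",
    headers={
        "Content-Type": "text/plain",  # N-Triples payload
        "x-begin": "true",             # start a transaction first
        "x-commit": "true",            # commit after the request succeeds
    },
)
# Sending this with urllib.request.urlopen(req) would add the triple
# and commit in one round trip.
```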
POST /session
Creates a new session. Takes the following parameters:
- autoCommit - defaults to false, meaning that the session starts as if a transaction had been started. With Sesame 2.7 transaction handling semantics, the default is true and an explicit begin is required.
- lifetime - the period of idleness after which the session times out
- loadInitFile - defaults to false; determines whether the initfile is loaded into this session
- script - a script to load into this session
- store - a specification of the store to access, written in the minilanguage described below
The minilanguage used by the store parameter works as follows:
- <store1> - the repository store1 in the root catalog
- <catalog1:store2> - the repository store2 in catalog catalog1
- <http://somehost:10035/repositories/store3> - a remote repository, identified by its URL
- <a> + <b> - a federation of the stores a and b
- <a>[rdfs++] - the store a with rdfs++ reasoning applied (restriction is also supported as a reasoner type). You can specify the context that inferred triples get using this syntax: <a>[rdfs++#<http://test.org/mycontext>]
- <a>{null <http://example.com/graph1>} - the store a filtered to contain only the default graph (null) and the graph named http://example.com/graph1. Any number of graphs can be given between the braces.
This syntax can be composed to create federations of filtered and reasoning stores, for example <http://somehost:10035/repositories/a>{null} + <b>[rdfs++].
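The composed specification is passed as the store parameter when creating a session. A sketch (store and catalog names are examples):

```python
from urllib.parse import urlencode

# Example specs in the store minilanguage (names are hypothetical).
plain      = "<store1>"                           # store1, root catalog
in_catalog = "<catalog1:store2>"                  # store2 in catalog1
reasoning  = "<store1>[rdfs++]"                   # with rdfs++ reasoning
filtered   = "<store2>{null <http://example.com/graph1>}"  # graph-filtered
federation = f"{in_catalog} + {reasoning}"        # federate the two

# The spec becomes the store parameter of a POST /session request:
body = urlencode({"store": federation, "autoCommit": "true"})
print(body)
```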
The service returns the URL of the new session. Any sub-URLs that were valid under a repository's URL will also work under this session URL. For example, if http://localhost:55555/sessions/7e8df8cd-26b8-26e4-4e83-0015588336ea is returned, you can use http://localhost:55555/sessions/7e8df8cd-26b8-26e4-4e83-0015588336ea/statements to retrieve the statements in the session.
POST /repositories/[name]/session
This is a shortcut for creating a session on a local triple store. It takes autoCommit, loadInitFile, script, and lifetime arguments as described above, and creates a session for the store that the URL refers to.
POST [session-url]/session/close
Explicitly closes a session.
GET [session-url]/session/ping
Let the session know you still want to keep it alive. (Any other request to the session will have the same effect.)
GET [session-url]/session/isActive
Returns a boolean indicating whether there is a currently active transaction for this session. This is the logical complement of autoCommit.
GET [session-url]/session/autoCommit
Returns a boolean indicating whether auto-commit is currently active for this session. This is the logical complement of isActive.
POST [session-url]/session/autoCommit
Used to change the auto-commit flag for the session. Takes a single boolean argument called on. If it is set to true in a transaction, then the transaction is automatically committed. If it is set to false outside a transaction, then a new transaction is started. Note that with Sesame 2.7 semantics, commit and rollback set this flag to true.
POST [session-url]/commit
Commit the current transaction. With Sesame 2.6 semantics, a new transaction is started. With Sesame 2.7 semantics, auto-commit mode is entered.
POST [session-url]/rollback
Roll back the current transaction. With Sesame 2.6 semantics, a new transaction is started. With Sesame 2.7 semantics, auto-commit mode is entered.
GET /repositories/[name]/mongoParameters
Returns a JSON object with keys:
- server - server name where MongoDB is running
- port - port to use to communicate with MongoDB
- database - name of the database to use when querying MongoDB
- collection - name of the collection to use when querying MongoDB
- user - used to authenticate to the MongoDB server
Note that the password set with POST is not returned.
POST /repositories/[name]/mongoParameters
Accepts the following parameters:
- server - server name where MongoDB is running
- port - port to use to communicate with MongoDB
- database - name of the database to use when querying MongoDB
- collection - name of the collection to use when querying MongoDB; required to be non-empty
- user - used to authenticate to the MongoDB server
- password - used to authenticate to the MongoDB server
See also: MongoDB interface.
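The parameters are sent as an ordinary form body. A sketch (all connection values below are hypothetical):

```python
from urllib.parse import urlencode

# Hypothetical MongoDB connection settings for the repository.
params = {
    "server": "mongo.example.com",
    "port": 27017,
    "database": "mydb",
    "collection": "people",   # required to be non-empty
    "user": "aguser",
    "password": "secret",     # write-only: a later GET will not return it
}
body = urlencode(params)
# POST this form body to /repositories/[name]/mongoParameters.
print(body)
```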
Process management
An AllegroGraph server consists of a group of different operating-system processes. The following services provide a minimal process-debugging API over HTTP. All of them are only accessible to superusers.
GET /processes
Returns a list of tuples showing the processes the server currently has running. Each row has a pid (the OS process ID) and a name (the process name) property.
GET /processes/[id]
Returns stack traces for the threads in the given process.
DELETE /processes/[id]
Kills the specified process. Obviously, you yourself are responsible for any adverse effects this has on the functioning of the server.
POST /processes/[id]/telnet
Starts a telnet server in the specified process. A random port will be chosen for the server and returned as the response. Note that the telnet server will allow anyone to connect, creating something of a security concern if the server is on an open network.
Warm standby
Warm standby allows a second AllegroGraph server to keep an up-to-date copy of a repository on another server. In case the first server fails, this copy can be made to take over its responsibilities.
Documentation on this feature is not complete yet.
GET /repositories/[name]/warmstandby
Only supports an application/json accept type. Returns a representation of the current standby status for the repository.
PUT /repositories/[name]/warmstandby
Requires jobname, primary, primaryPort, user, and password parameters. Makes this repository start replicating the source store on the server primary:primaryPort, using the given credentials to gain access.
DELETE /repositories/[name]/warmstandby
Stops a replication job. This command is sent to the client.
POST /repositories/[name]/warmstandby/switchRole
Sent to a repository that is currently functioning as a replication server. Causes a client (identified by the jobname parameter) to take over, making this repository a client of that new server. Takes primary, primaryPort, user, and password parameters, which specify the server on which the client repository lives, and the user account to use to access this server.
The becomeClient boolean parameter, which defaults to true, determines whether the server will start replicating its old client. The enableCommit parameter, also defaulting to true, controls whether no-commit mode will be turned off in the client.
Client URL (cURL, pronounced "curl") is a command-line tool for exchanging data between a device and a server from a terminal. It can be used to communicate from a shell with an AllegroGraph server. We used it in the auditing examples above. Here are some more examples showing how to do various common things. Note that all of these things can also be done using the AllegroGraph command-line tool agtool, which has a much simpler interface.
Creating a repo using cURL
In our example, the user is test and the password is xyzzy. This creates a repository:
$ curl -X PUT -u test:xyzzy "http://machine1.company.com:10035/repositories/test"
We get a list of repos (the kennedy repo already existed when the test repo was added):
$ curl -X GET -u test:xyzzy http://machine1.company.com:10035/repositories
uri: http://machine1.franz.com:10035/repositories/kennedy
relativeUri: repositories/kennedy
id: kennedy
title: kennedy
readable: true
writable: true
[...]
uri: http://machine1.company.com:10035/repositories/test
relativeUri: repositories/test
id: test
title: test
readable: true
writable: true
[...]
$
Adding data to a repo using cURL
Now let us add some data to the test repo. We create an N-Triples file mydata.nt with these contents:
<http://example.org#alice> <http://example.org#name> "Alice" .
<http://example.org#bob> <http://example.org#name> "Bob" .
<http://example.org#alice> <http://example.org#age> "26" .
<http://example.org#bob> <http://example.org#age> "33" .
This curl command loads the data:
$ curl -X POST http://machine1.company.com:10035/repositories/test/statements \
-u test:xyzzy --data "@mydata.nt" --header "Content-Type: text/plain"
4
$
This one shows the data was loaded properly:
$ curl -X GET http://machine1.company.com:10035/repositories/test/statements \
-u test:xyzzy
<http://example.org#alice> <http://example.org#name> "Alice" .
<http://example.org#bob> <http://example.org#name> "Bob" .
<http://example.org#alice> <http://example.org#age> "26" .
<http://example.org#bob> <http://example.org#age> "33" .
$
A SPARQL query using cURL
Now, let's use curl to query the data. This query:
select ?n ?age {
<http://example.org#alice> <http://example.org#name> ?n ;
<http://example.org#age> ?age
}
returns n = "Alice" and age = "26". Here is a curl command to get those results:
$ curl -u test:xyzzy --header 'Accept: application/json' \
-d 'query=select ?n ?age { <http://example.org#alice>
<http://example.org#name> ?n ; <http://example.org#age> ?age }' \
-d 'limit=1000' http://machine1.company.com:10035/repositories/test
{"names":["n","age"],"values":[["\"Alice\"","\"26\""]]}
$
Here is a similar curl command with the same results:
$ curl -u test:xyzzy \
-X POST "http://machine1.company.com:10035/repositories/test/sparql" \
-H "Accept: application/json" \
-H "Content-Type: application/x-www-form-urlencoded" \
--data-urlencode "query=SELECT ?n ?age WHERE { <http://example.org#alice> <http://example.org#name> ?n ; <http://example.org#age> ?age . }"
{"names":["n","age"],"values":[["\"Alice\"","\"26\""]]}
Multi-master replication
The multi-master replication facility is described in Multi-master Replication.
createCluster
PUT catalogs / [CATNAME] / repositories / [REPOSITORY] / repl / createCluster
Here is a cURL command which converts the repository foo in the root catalog (since no catalog is specified) into a replication cluster instance. We specify a group (group=firstg). The port is 10035.
$ curl -X PUT -u user:mypassword "http://machine1.franz.com:10035/repositories/foo/repl/createCluster?instanceName=fooinst&host=machine1&group=firstg&ifExists=supersede&user=user&password=mypassword&port=10035"
growCluster
Here is a cURL command which grows the cluster just created, making a copy on host machine2.franz.com.
$ curl -X PUT -u user:mypassword "http://machine1.franz.com:10035/repositories/foo/repl/growCluster?instanceName=foobinst1&host=machine2.franz.com&name=foo&group=firstg&user=user1&password=machine2pw&port=10035"
Other commands
All are described in the HTTP reference document.