Showing content from https://github.com/pgRouting/pgrouting/issues/440 below:

Integration of Contraction Hierarchy code into pgrouting · Issue #440 · pgRouting/pgrouting · GitHub

On 1/3/2016 8:19 AM, rohith reddy wrote:

Hey Steve,
As you know, Mukul and I are working on contraction hierarchies
for routing. I have been reading papers on contraction hierarchies for
road networks.
In most of them, the method of contraction is as follows:

1. Node ordering for contraction (based on some criterion).
2. Contracting the node with the highest priority (most likely to be
contracted).
3. Introducing (upon decision) shortcuts between the nodes adjacent to
this node.

Some of the node ordering techniques I found are:

1. Edge difference
2. Uniformity
3. Deleted neighbors
4. Voronoi regions
During the last month we have worked on a contraction model where
the nodes are ordered according to their degrees and then contracted. We
have defined levels of contraction as follows:
Level 0: Removal of all one-degree nodes.
Level 1: Removal of all two-degree nodes.
I implemented a C++ class which contracts the graph, given the level
of contraction. I also implemented C++ classes for Dijkstra and
A*, which extend the above class and run the algorithm on the
contracted graph. These classes can be integrated with pgRouting. The
process is as follows:

1. Contract the graph to the given level.
2. Run Dijkstra/A* on the contracted graph.
3. Unpack the path and derive the original path.

I have tested it on some test cases and it works fine. We are
actually not sure of the exact contraction model, but in order to test we
have adopted the model above. I am also planning to implement a
query for this in PostgreSQL. Can you help me with how to proceed?

Regards,
Rohith Reddy.

Hi Rohith,

This sounds like excellent work that you guys have been doing. Regarding integrating this into PostgreSQL, I want to bring Vicky into the discussion, because she has reworked the C code that is needed into some easy-to-use templates, and she has done a lot of work building reusable C++ classes for Dijkstra based on the Boost Graph Library.

The integration into PostgreSQL follows these steps:

  1. Define a pl/pgsql function (or functions) callable from SQL. This is a little more difficult in the case of contraction hierarchies because we need to a) contract the graph and store it, and b) recall the contracted graph and run queries against it. So we have to consider how and where to store the contracted graph.
  2. Given the pl/pgsql signature, we need to write a C function that processes the arguments of the function into C structs and calls a C++ wrapper around your code.
  3. We need a C++ wrapper that takes the C structs passed to it, creates an instance of your solver or contractor, loads the data from the C structs into it, runs it, and finally returns the results back to the C code, which in turn returns them to the SQL level.

So you might have some hypothetical commands like:

insert into contracted_graphs
  select 'my_contracted_graph_name'::text, contracted_blob::bytea
    from pgr_contractgraph(
        sql_for_edges::text,
        sql_for_turn_restriction::text, -- if you support them
        contraction_level::integer default 0, -- or whatever
        ... -- other options or data you might need
    );

where sql_for_edges might look like:

'select edge_id, nodeid_a, nodeid_b, weight_forward, weight_reversed, ...
  from edge_table'

We might store contracted blobs in a table like:

create table contracted_graphs (
    name text,
    blob bytea
);

Issues to be sorted out:

  1. what is the best way to store large blobs of data?
  2. should we look at the contracted graph as a blob, or is it better viewed as structured data for the purpose of storing it in the database?
  3. would it be faster to store the data in file(s) on disk and only remember the filenames in the database?
  4. if in files can multiple simultaneous queries access the files without issues?

Remember, a multi-user database means NO globals. PostgreSQL is a long-running server process, so NO memory leaks, etc.

The issue here is the performance of serializing the contracted graph to store it in the database, and then later retrieving and deserializing it. PostgreSQL has some performance issues around fetching and storing large blob objects that we would need to look at. Also, I have never done this in the past, so some investigation will need to be done.

For the actual route query, you should model the SQL functions and code on the new code Vicky has done for pgr_dijkstra(), and I will let her add links to that below. But in this case, rather than passing the edges in, you would pass in the contracted graph and the route query parameters, and you would want to return results structured as that code does. Please look at her new code, NOT the 2.0 code.

