Hello,
basically we used dtnsend and dtnrecv with some custom scripting glue around them. So far there is no standalone dtnperf tool for IBR-DTN, but we are always happy to receive contributions :)
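To illustrate, here is a minimal sketch of the kind of timing glue one could write around the IBR-DTN tools. The tool names dtnsend/dtnrecv come from IBR-DTN, but the exact dtnrecv flags, the endpoint names, and the destination EID below are assumptions for illustration, not our actual script:

```python
# Hypothetical glue: time one file transfer via dtnsend/dtnrecv and
# report goodput. Requires a running dtnd; CLI flags are assumptions.
import os
import shutil
import subprocess
import tempfile
import time


def throughput_mbit_s(n_bytes: int, seconds: float) -> float:
    """Goodput in Mbit/s for n_bytes transferred in `seconds`."""
    return n_bytes * 8 / seconds / 1e6


def timed_transfer(dst_eid: str, payload: bytes) -> float:
    """Send `payload` to dst_eid, wait for delivery, return Mbit/s.

    Assumes a local dtnd is running and that dtnrecv supports
    --name/--file options (an assumption; check your IBR-DTN version).
    """
    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(payload)
        src = f.name
    try:
        t0 = time.monotonic()
        subprocess.run(["dtnsend", dst_eid, src], check=True)
        subprocess.run(
            ["dtnrecv", "--name", "perf", "--file", "/tmp/perf.out"],
            check=True,
        )
        return throughput_mbit_s(len(payload), time.monotonic() - t0)
    finally:
        os.unlink(src)


# Only attempt a live transfer when the IBR-DTN tools are installed.
if shutil.which("dtnsend"):
    print(timed_transfer("dtn://peer/perf", os.urandom(1 << 20)))
else:
    print("dtnsend not found; skipping live transfer")
```

Looping this over several payload sizes and averaging the results gives numbers comparable to what dtnperf reports for DTN2.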
IBR-DTN defaults to memory storage if neither blob_path nor storage_path is defined in the configuration file. In this case you may want to limit the amount of memory IBR-DTN will use for storage with something like
limit_storage = 80M
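Putting both points together, a minimal configuration fragment for a memory-only setup might look like this (the comment lines are mine):

```
# No storage_path or blob_path defined -> bundles are kept in memory.
# Cap the memory the storage may consume:
limit_storage = 80M
```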
Recently there has been some more work by other groups to assess BP implementation performance: one effort uses a more realistic scenario (http://dl.acm.org/citation.cfm?id=2348624) and another focuses on an embedded scenario (http://dl.acm.org/citation.cfm?id=2348634).
For any IBR-DTN related questions you might want to consider joining the IBR-DTN mailing list: https://mail.ibr.cs.tu-bs.de/mailman/listinfo/ibr-dtn
Best regards,
Sebastian
On 12.09.2012 at 19:20, Muri, Paul (GSFC-450.0)[GSFC - HIGHER EDUCATION] wrote:
Hi all,
I'm seeing results similar to your paper, "Performance Comparison of DTN BP Implementations," running the dtn2 implementation with the built-in dtnperf tool between Gbit NICs on 2 machines. I was wondering which tools (dtnperf, dtnsend/dtnrecv) were run for each implementation to obtain the throughput results? Also, how can dtn2 and IBR-DTN be configured to use memory-based backends?
Much thanks, Paul
-- Paul Muri NASA GSFC/University of Florida (954) 605-1989