Hello Johannes,

Thanks a lot for your detailed explanation.

I didn't know that the client and daemon can run on different machines. How do client
applications discover the server address if the DTN daemon is on another machine? Among
the dtnsend and dtnrecv command-line options I see only "-U" for UNIX domain sockets,
which, as far as I know, only work on localhost.

Both /var/tmp and /var/spool/dtnd are on the same disk partition, which has 6.4 GB free.

Best Regards,
Sergey Syreskin


Mon, 08 Oct 2012 09:36:41 +0200 from Johannes Morgenroth <morgenro@ibr.cs.tu-bs.de>:
Hello Sergey,

To understand your issue completely I need some more details.

Is /var/tmp on a hard drive, or is it a RAM-backed virtual drive?

How much space is available in these paths?
 - /var/tmp
 - /var/spool/dtnd


You are right about the copy behavior in IBR-DTN. There are several
reasons to process bundles that way, but first you have to understand
the classical client-server model [1], which also applies to the DTN
daemon and DTN clients (such as dtnrecv).

[1] http://en.wikipedia.org/wiki/Client%E2%80%93server_model

Each client connects to the daemon over a TCP channel. There is no
assumption of shared disk space. This allows the daemon and clients to
be split across different machines if necessary and, much more
importantly, makes for a clean software design without permission
issues when deploying on various platforms. It also means it is
impossible to simply move the bundle to another location in the final
delivery step, because you cannot move a file through a TCP connection.
Even if that were possible, it would break the ability to deliver
multiple copies of the same bundle to different clients.

The copying inside the daemon between the blob and bundles paths is
also required, because the daemon needs working copies of the bundles.
Think of the "blob" path as a workspace, while bundles in the
"bundles" path sit in a kind of long-term storage. Each time the
daemon has to deliver a bundle to a client or receives some data, the
bundle is copied into the volatile BLOB storage. Once the store
process has put a bundle into the storage and no process holds a
pointer to the bundle any more, the BLOB file disappears.

However, the SQLite version of the storage already has an improved
mechanism that works with hardlinks instead of copies, which might be
much faster in your setup.


Kind regards,
Johannes Morgenroth


Am 04.10.2012 12:06, schrieb Sergey Sireskin:
> Hello, IBR-DTN developers and users!
>
> I suspect that IBR-DTN does excessive work copying files during
> dtnrecv.
>
> Here is what I do.
> 1. On the node2 I start dtnrecv --file /var/tmp/1.iso --name file
> 2. On the node1 I start dtnsend dtn://node-2.dtn/file /media/1.iso
>
> Then, in another console on node2, I watch what IBR-DTN does.
> 1. It stores the data received from node1 in a blob file in
> /var/spool/dtnd/blobs/
> 2. Then it copies that blob file to a bundle file in
> /var/spool/dtnd/bundles/
> 3. Then it copies the bundle file to the destination file /var/tmp/1.iso
>
> Steps 1 and 2 look fine to me. But why do a bit-by-bit copy in step 3
> instead of just moving (mv) the bundle file to the destination file,
> avoiding the expensive data transfer?
> This excessive data copying consumes memory to hold the whole bundle
> and takes more time.
>
> My node2 has 2 GB of RAM. The file that I send (1.iso) is 1.2 GB. RAM
> gets exhausted in step 3, and dtnrecv fails with the error message
> "Aborted".
>
> So I have two proposals.
> 1. Implement better error reporting, e.g. "Not enough memory to copy
> file." instead of "Aborted."
> 2. Move bundle file to the destination file instead of copying it.
>
> -------------------------------------------------------------------------------------------------------------------------------
> My IBR-DTN config:
> local_uri = dtn://node-2.dtn
> logfile = /var/log/ibrdtn.log
> timezone = +4
> limit_blocksize = 0
> user = dtnd
> blob_path = /var/spool/dtnd/blobs
> storage_path = /var/spool/dtnd/bundles
> limit_storage = 5G
> discovery_announce = 0
> net_interfaces = eth0
> net_rebind = yes
> net_autoconnect = 60
> net_eth0_type = tcp
> net_eth0_interface = eth0
> net_eth0_port = 4556
> net_eth0_discovery = yes
> routing_forwarding = yes
> static1_address = 192.168.150.38
> static1_port = 4556
> static1_uri = dtn://node-1.dtn
> static1_proto = tcp
> static1_immediately = yes
> dht_enabled = yes
> dht_bootstrapping = yes
>
> Best Regards,
> Sergey Syreskin
>
>
> --
> !! This message is brought to you via the `ibr-dtn' mailing list.
> !! Please do not reply to this message to unsubscribe. To unsubscribe or adjust
> !! your settings, send a mail message to <ibr-dtn-request@ibr.cs.tu-bs.de>
> !! or look at https://www.ibr.cs.tu-bs.de/mailman/listinfo/ibr-dtn.
>


--
Johannes Morgenroth
Institut fuer Betriebssysteme und Rechnerverbund
TU Braunschweig
Muehlenpfordtstrasse 23
D-38106 Braunschweig
Tel.: +49-531-391-3249
Fax.: +49-531-391-5936