Hello. Now that my code works in the memory-only mode of IBR-DTN, I went on to test it with disk-based storage, which I believe I activated by entering directories for the bundles to be stored in into the config file; IBR-DTN also acknowledged that it was using simple storage. When I started it, though, the program froze the first time I tried to interact with a bundle, whereas it was fine in memory-only mode. The conditions that caused this were as follows:
1. I get a bundle from storage in the QueueBundleEvent handler, check whether it is one I put in through dtnsend, and forward it to an event handler in the router.
2. In the event handler I check for some more bundle issues, then pass the bundle on to another function that splits it up.
3. In that function I attempt to create a DefaultSerializer using a blank BLOB reference and stream the bundle into it.

The last step is where my function fails: I get an exception saying that the output stream went bad, and 0 of the 49000 bytes in my original bundle were read.

Any ideas as to why this happens only with simple storage and not in memory-only mode? Below is a code segment that shows what I am doing. Thank you.
Carson Dunbar
u_int32_t NCRoutingExtension::generate_and_enqueue_encodings(dtn::data::Bundle bundle, u_int32_t max_chunk_size)
{
    // start with just the standard basis, i.e. fragmentation

    // need to figure out the number of chunks that will be created
    // still requiring the number of chunks to be a multiple of 8?
    // let's try to not require it
    // dtn::core::BundleStorage &storage = (**this).getStorage();
    u_int32_t num_chunks, chunk_size;
    size_t orig_bundle_len = 0;
    size_t block_len;

    ibrcommon::BLOB::Reference ref = ibrcommon::BLOB::create();
    dtn::data::DefaultSerializer serializer(*ref.iostream());

    try {
        // activate exceptions for this method
        if (!ref.iostream()->good()) throw ibrcommon::IOException("stream went bad");

        serializer << (bundle);

        // flush the stream
        (*ref.iostream()) << std::flush;
    } catch (const ibrcommon::Exception &ex) {
        IBRCOMMON_LOGGER_DEBUG(10) << ex.what() << IBRCOMMON_LOGGER_ENDL;
        throw;
    }
Hello Carson.
I think there is a mistake in the usage of the iostream element of the BLOB. The iostream is used to lock the BLOB and to open/close the corresponding file descriptor. You need to keep one iostream object alive for the whole time you are working with the stream. In your code you create and destroy the stream within the same expression: the temporary returned by ref.iostream() is destroyed as soon as the statement ends, which closes the file descriptor the serializer still refers to.

This code is not tested, but something like this should work:
ibrcommon::BLOB::Reference ref = ibrcommon::BLOB::create();
ibrcommon::BLOB::iostream ios = ref.iostream();
dtn::data::DefaultSerializer serializer(*ios);

try {
    // activate exceptions for this method
    if (!ios->good()) throw ibrcommon::IOException("stream went bad");

    serializer << (bundle);

    // flush the stream
    (*ios) << std::flush;
} catch (const ibrcommon::Exception &ex) {
    IBRCOMMON_LOGGER_DEBUG(10) << ex.what() << IBRCOMMON_LOGGER_ENDL;
    throw;
}
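To make the difference explicit, here is the pitfall next to the corrected usage (same caveat, untested):

    // BROKEN: the temporary returned by ref.iostream() is destroyed at the
    // end of this statement; that releases the lock on the BLOB and closes
    // the file descriptor, leaving the serializer with a dead stream.
    dtn::data::DefaultSerializer serializer(*ref.iostream());

    // OK: the named iostream keeps the BLOB locked and the file descriptor
    // open for as long as 'ios' stays in scope.
    ibrcommon::BLOB::iostream ios = ref.iostream();
    dtn::data::DefaultSerializer serializer(*ios);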
Kind regards, Johannes
Thank you for that advice; it helped me get past the catch and the stream error in the code I posted previously. Unfortunately, the stream is still not reading anything from the bundle in disk-based storage, while it works fine in memory-based storage. I have a line that outputs gcount() after a read from the ios, and in memory-based storage I get the following output:
Wed Aug 29 11:09:29 2012 Timestamp: 1346252969.395592 DEBUG.5: orig_bundle_len=4257161 bytes_remaining=4257161 chunk_size=49502 len=49502 gcount 49502
whereas with disk-based storage I get:
Wed Aug 29 11:10:21 2012 Timestamp: 1346253021.760861 DEBUG.5: orig_bundle_len=4257161 bytes_remaining=4257161 chunk_size=49502 len=49502 gcount 0
This causes my code to exit at an assert that makes sure I read a specific amount of data. In addition, I checked the variable that I am reading into, and no changes to the data are visible in the output. Below is the code that I am using to read the data:
IBRCOMMON_LOGGER(info) << "Generated new collection UUID: " << encoding_set_id.to_string() << IBRCOMMON_LOGGER_ENDL; IBRCOMMON_LOGGER(info) << "Producing bundle chunks..." << IBRCOMMON_LOGGER_ENDL; for (c_num = 0; c_num < num_chunks; c_num++) { chunks[c_num] = new u_char[chunk_size]; u_char *chunk_data = chunks[c_num]; size_t offset = orig_bundle_len - bytes_remaining; size_t len = chunk_size; if (bytes_remaining < chunk_size) { len = bytes_remaining; } ios->read((char*)chunk_data, len); if(bytes_remaining < chunk_size) { for(int i = len; i < chunk_size; i++) { chunk_data[i] = 0; } complete = true; } //size_t bytes_produced = BundleProtocol::produce(bundle, blocks, chunk_data, // offset, len, &complete); IBRCOMMON_LOGGER_DEBUG(5) << "orig_bundle_len=" << orig_bundle_len << " bytes_remaining=" << bytes_remaining << " chunk_size=" << chunk_size << " len=" << len << " gcount " << ios->gcount() << IBRCOMMON_LOGGER_ENDL; assert(len == ios->gcount());
The assert on the last line is where I am currently getting stuck. Any ideas as to why?
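For what it is worth, one untested guess of mine (and possibly wrong): the get position may need to be rewound. I am assuming the memory-backed BLOB sits on a std::stringstream, which keeps independent get and put pointers, so reads start at the beginning, while the file-backed BLOB behaves like a std::fstream, where reads and writes share a single file position, so after serializing the bundle the next read() starts at end-of-file and returns nothing. A minimal sketch of that idea, continuing from the code above:

    // serialize the bundle into the BLOB as before
    serializer << bundle;
    (*ios) << std::flush;

    // assumption: on a file-backed BLOB the stream position now sits at the
    // end of the written data, so clear any error flags and rewind the get
    // pointer before reading the chunks back out
    ios->clear();
    ios->seekg(0, std::ios::beg);

Thanks.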
Carson