
Hi!
More than a year has passed since the 0.4.1 release of libsmi. Prompted by a number of requests during recent days and over the past months :-), I've fixed up the test suite and bundled a new release: 0.4.2. It contains numerous bug fixes, newly published or updated Std MIBs, new checks in smilint and smidump, and updated smidump drivers.
As usual, you can find it at
http://www.ibr.cs.tu-bs.de/projects/libsmi/
ftp://ftp.ibr.cs.tu-bs.de/pub/local/libsmi/
CVS users, please note that the anoncvs server, which has been down for some time, is up and running again. However, I'll leave the daily CVS snapshots on the FTP server for those folks who sit behind firewalls and have problems accessing the CVS server.
Enjoy,
-frank

Frank,
Frank Strauß wrote:
Hi!
More than a year has passed since the 0.4.1 release of libsmi. [...] I've fixed up the test suite and bundled a new release: 0.4.2. [...]
The parser tests in 0.4.1 all passed after my fixes to parser.test.in, but now I am getting failures on lots of them. I took a look at the output for test 2 and it looks like my compiled version was reporting more warnings than expected, but no more errors. Could you help me verify what is going on here? See the attached parser.out directory.
Harold
Checking LIBSMI-TEST-001-MIB. 0 errors/warnings, ok.
Checking LIBSMI-TEST-002-MIB. unexpected output. see parser.out directory.
Checking LIBSMI-TEST-003-MIB. 2 errors/warnings, ok.
Checking LIBSMI-TEST-004-MIB. unexpected output. see parser.out directory.
Checking LIBSMI-TEST-005-MIB. unexpected output. see parser.out directory.
Checking LIBSMI-TEST-006-MIB. 1 errors/warnings, ok.
Checking LIBSMI-TEST-007-MIB. unexpected output. see parser.out directory.
Checking LIBSMI-TEST-008-MIB. unexpected output. see parser.out directory.
Checking LIBSMI-TEST-009-MIB. unexpected output. see parser.out directory.
Checking LIBSMI-TEST-010-MIB. unexpected output. see parser.out directory.
Checking LIBSMI-TEST-011-MIB. unexpected output. see parser.out directory.
Checking LIBSMI-TEST-012-MIB. unexpected output. see parser.out directory.
Checking LIBSMI-TEST-013-MIB. unexpected output. see parser.out directory.
Checking LIBSMI-TEST-014-MIB. unexpected output. see parser.out directory.
Checking LIBSMI-TEST-015-MIB. 0 errors/warnings, ok.
Checking LIBSMI-TEST-016-MIB. unexpected output. see parser.out directory.
Checking LIBSMI-TEST-017-MIB. unexpected output. see parser.out directory.
FAIL: parser.test

Frank,
Could you confirm whether or not the final test in 'make check' completes on your system for the 0.4.2 release? I sent in good notes below but haven't gotten a response yet confirming whether this is common or something specific to Cygwin.
Harold
Harold L Hunt II wrote:
Frank,
Frank Strauß wrote:
[...]
The parser tests in 0.4.1 all passed after my fixes to parser.test.in, but now I am getting failures on lots of them. I took a look at the output for test 2 and it looks like my compiled version was reporting more warnings than expected, but no more errors. Could you help me verify what is going on here? See the attached parser.out directory.
Harold
[...]

Harold L Hunt II wrote:
Frank,
Could you confirm whether or not the final test in 'make check' completes on your system for the 0.4.2 release? I sent in good notes below but haven't gotten a response yet confirming whether this is common or something specific to Cygwin.
I just reran the test suite and all the parser tests complete just fine, at least on my x86 Debian GNU/Linux platform. (However, I have ongoing trouble with the smidump-cm test, which generates varying floating point numbers, but that's another story.)
Can anyone else on this list confirm any of our - good or failing - observations on the last test of the "make check" test suite in 0.4.2?

Frank Strauß writes:
Frank> Can anyone else on this list confirm any of our - good or
Frank> failing - observations on the last test of the "make check"
Frank> test suite in 0.4.2?
I just compiled on "SunOS thesun 5.8 Generic_108528-15 sun4u sparc SUNW,Sun-Fire-280R" and I also see that several tests fail. It turns out that the order of some of the identifiers is different on these platforms. For example, smidump -f metrics on Debian GNU/Linux produces something like
MODULE       TYPE             EXT-USAGE
             Integer32        31.6%
SNMPv2-SMI   Counter32        31.6%
SNMPv2-TC    AutonomousType   15.8%
IF-MIB       InterfaceIndex   10.5%
SNMPv2-SMI   Counter64         5.3%
SNMPv2-TC    TruthValue        5.3%
while the Solaris box produces
MODULE       TYPE             EXT-USAGE
             Integer32        31.6%
SNMPv2-SMI   Counter32        31.6%
SNMPv2-TC    AutonomousType   15.8%
IF-MIB       InterfaceIndex   10.5%
SNMPv2-TC    TruthValue        5.3%
SNMPv2-SMI   Counter64         5.3%
Note that the last two lines are swapped and that this list is sorted by the percentages. The code just calls qsort(), and I guess this is why we see these differences: qsort() implementations differ in how they order identical values...
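For illustration, here is a minimal sketch (not the actual metrics driver code; the struct and field names are made up) of how a comparison function can break ties with secondary keys so that qsort() output becomes deterministic even when the percentages are equal:

  /* Hypothetical example: sort usage entries by descending count,
   * breaking ties by module and type name, so entries with equal
   * counts come out in the same order on every platform regardless
   * of how the local qsort() arranges equal keys. */
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  struct usage {
      const char *module;
      const char *type;
      int         count;
  };

  static int cmp_usage(const void *pa, const void *pb)
  {
      const struct usage *a = pa, *b = pb;
      int c;

      if (a->count != b->count)          /* primary key: descending count */
          return b->count - a->count;
      c = strcmp(a->module, b->module);  /* tie-breaker 1: module name */
      if (c)
          return c;
      return strcmp(a->type, b->type);   /* tie-breaker 2: type name */
  }

  int main(void)
  {
      struct usage u[] = {
          { "SNMPv2-TC",  "TruthValue",     1 },
          { "SNMPv2-SMI", "Counter64",      1 },
          { "IF-MIB",     "InterfaceIndex", 2 },
      };
      size_t i, n = sizeof(u) / sizeof(u[0]);

      qsort(u, n, sizeof(u[0]), cmp_usage);
      for (i = 0; i < n; i++)
          printf("%-12s %-16s %d\n", u[i].module, u[i].type, u[i].count);
      return 0;
  }

With tie-breakers like these, the two 5.3% entries above would always print in the same order, whatever qsort() implementation the platform ships.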
/js

Juergen Schoenwaelder wrote:
Frank Strauß writes:
Frank> Can anyone else on this list confirm any of our - good or
Frank> failing - observations on the last test of the "make check"
Frank> test suite in 0.4.2?
I just compiled on "SunOS thesun 5.8 Generic_108528-15 sun4u sparc SUNW,Sun-Fire-280R" and I also see that several tests fail. It turns out that the order of some of the identifiers is different on these platforms. For example, smidump -f metrics on Debian GNU/Linux produces [...] Note that the last two lines are swapped and that this list is sorted by the percentages. The code just calls qsort(), and I guess this is why we see these differences: qsort() implementations differ in how they order identical values...
I just ran the test suite on our Sparc/Solaris box and got 4 failed tests: this "metrics" problem, two segfaults ("xml" and "python"), and the varying floating point values in the "cm" case. But still no problems with the "parser" test ("PASS: parser.test").

Frank Strauß writes:
Frank> I just ran the test suite on our Sparc/Solaris box and got 4
Frank> failed tests: this "metrics" problem, two segfaults ("xml" and
Frank> "python"), and the varying floating point values in the "cm"
Frank> case.
I have checked a patch into the CVS which should fix the sorting differences in the metrics driver.
/js

Juergen Schoenwaelder wrote:
Frank> I just ran the test suite on our Sparc/Solaris box and got 4
Frank> failed tests: this "metrics" problem, two segfaults ("xml" and
Frank> "python"), and the varying floating point values in the "cm"
Frank> case.
I have checked a patch into the CVS which should fix the sorting differences in the metrics driver.
Fine. I've looked at the new output and committed it to the expected "metrics" template directory. The test now completes successfully on Linux as well as on Solaris.

Just FYI, I just announced this to my local Python users group.
Greetings all. I recently rewrote the Python wrapper classes for the libsmi library. libsmi (a C library for parsing and accessing SMI/MIB data) has iterator functions, which are now mapped to Python generators. I used SWIG to generate the basic wrapper and then modified it.
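(For anyone who hasn't used the C API directly: libsmi iteration is done with first/next function pairs, and that is the kind of loop the generators wrap. A minimal sketch of the C side, assuming a standard libsmi installation and that IF-MIB is on the default MIB search path:)

  /* Sketch of the libsmi first/next iterator idiom on the C side.
     Build with something like: cc modules.c -lsmi */
  #include <stdio.h>
  #include <smi.h>

  int main(void)
  {
      SmiModule *m;

      smiInit(NULL);            /* default configuration */
      smiLoadModule("IF-MIB");  /* load a module plus its imports */

      /* walk all loaded modules with the first/next pair */
      for (m = smiGetFirstModule(); m; m = smiGetNextModule(m))
          printf("%s\n", m->name);

      smiExit();
      return 0;
  }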
Let me know if you want more information.
PS. Currently only available at http://sourceforge.net/projects/pynms using the anonymous CVS access method.
libsmi home page:
http://www.ibr.cs.tu-bs.de/projects/libsmi/
The wrapper should be complete; let me know if you find that it is not.
Also, there are two unit test modules for it: test_SMI and test_libsmi. This is because there are actually two modules you can use: the "stock" one generated by SWIG, and my hand-modified one.
Once you get pyNMS installed, run the unit tests as follows:
qaunittest test_SMI
and
qaunittest test_libsmi
Note that the unit test modules are not complete yet.
Also note that the modified wrapper uses a different naming style, one that I prefer and that seems to be more common in the Python world.

Keith,
That is great news.
But, I am a little confused as to why I was cc'd on this. You don't seem to be requesting that I do anything for the Cygwin libsmi package...
Harold
Keith Dart wrote:
Just FYI, I just announced this to my local Python users group.
Greetings all. I recently rewrote the Python wrapper classes for the libsmi library. [...]
PS. Currently only available at http://sourceforge.net/projects/pynms using the anonymous CVS access method.
[...]

On Sun, 2003-12-21 at 21:30, Harold L Hunt II wrote:
Keith,
That is great news.
But, I am a little confused as to why I was cc'd on this. You don't seem to be requesting that I do anything for the Cygwin libsmi package...
No, sorry about that. Out of sheer laziness I wrote that message by replying to all on another message from the list and rewriting the subject line. That is all. 8-)

Keith,
Keith Dart wrote:
On Sun, 2003-12-21 at 21:30, Harold L Hunt II wrote:
[...]
No, sorry about that. Out of sheer laziness I wrote that message by replying to all on another message from the list and rewriting the subject line. That is all. 8-)
No problem :)
Harold
participants (4)
- Frank Strauß
- Harold L Hunt II
- Juergen Schoenwaelder
- Keith Dart