Problem Solving Using ODBC Apps [Part 1]

    By: Paul Vero on Dec 03, 2012

    I often run across customer problems that are very difficult to diagnose. Sometimes the application is too complex to bring in-house, and converting it to a simplified test case can consume valuable time and resources. Maybe the customer code is proprietary and security concerns get in the way. I’ve seen cases where we obtain good tracing, but it is still difficult to pinpoint which code is causing the problem, or a third-party API or interface hides the details of the ODBC API calls. When these scenarios happen, I like to take the ODBC trace and create what I call an “ODBC App” out of it: starting from a code template, I fashion a sequence of API calls that replicates the same problem. At that point, I can troubleshoot and debug further to seek the root cause of the consternation. I can also experiment with the code to discover possible workarounds, try other versions of the ODBC driver, and build up a library of code I can reuse the next time I run into a problem where I can unleash my ODBC App.

    My showcase problems are situations that occurred at a very large bank: applying a fix for a critical problem unveiled two new problems they had never seen before. These are regressions, meaning new bugs introduced by code applied to fix something else. Typically, the new bugs are NOT a result of the change that fixed the original problem; most likely they were introduced by other fixes or feature development. Existing tests don’t encounter them because they exercise a unique situation never hit before, and QA testing can’t possibly conceive of every combination of API calls. Often we see new bugs arise as a direct result of a developer coding something in a unique way, or of ASE behaving in an unorthodox manner; the driver coder didn’t anticipate those possibilities, and the bug shows up in the customer’s test.

    Just think about probability and statistics: as the number of possible combinations of test data and properties increases, the number of possible tests increases as well. I’ve created tests that probe the boundaries of numerical data. While testing an overflow situation, I had to account for precision (total number of digits) and scale (number of digits after the decimal point), and test at maximum precision, in between the maximum and minimum, and at very low values. Developing the test ended up producing a large number of combinations, and the possibility of missing a particular condition is a reality we all encounter. I’ll cover the two different problems, pointing out my train of thought and how I put together the ODBC App I used for testing each one.


    Read Part 2

    Released: December 3, 2012, 8:25 am | Updated: January 31, 2014, 9:40 pm
    Keywords: ASE Developer Article | Technical Journal | ASE | ASE Developer's Edition | ASE Development | Development | ODBC | Paul Vero | SQL




    Copyright © 2014 ISUG-TECH. All Rights Reserved
    All material, files, logos and trademarks within this site are copyright their respective organizations
