
Automatic Nomenclature in LaTeX

Dear Reader,

You will doubtless have struggled as I have with ensuring that your nomenclature section is up-to-date and comprehensive.  Normally this involves printing out the document and ticking off the symbols in the nomenclature as you go through the document symbol by symbol.  The sort of dreary drudgery that computers should take care of.  A short web search reveals that they do.

The nomencl package provides a good solution.  Essentially you add nomenclature information into the document where you first use the symbol, and the package then compiles it into a nomenclature section.  There are a couple of things to watch out for, though.

Your LaTeX document needs \usepackage{nomencl} in the preamble and a \makenomenclature where the \maketitle command usually goes (well, this worked for me anyway).  You then need to put \printnomenclature where you would like the nomenclature section to appear.
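
For orientation, here is a minimal sketch of where these commands sit (the article class and the comments are mine, not from the original post):

\documentclass{article}
\usepackage{nomencl}        % load the package in the preamble

\begin{document}
\makenomenclature           % where \maketitle would usually go

% ... document body, with a \nomenclature command at each symbol's first use ...

\printnomenclature          % the nomenclature section is typeset here
\end{document}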

Where you first use a symbol that you want in the nomenclature, a command such as \nomenclature{$C_{P0}$}{Loss coefficient}% is required. Note the trailing % sign.
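
In the running text this looks something like the following (the sentences and the second entry are just illustrations, not from the post):

The loss coefficient $C_{P0}$ is defined in the usual way.
\nomenclature{$C_{P0}$}{Loss coefficient}% the trailing % stops a stray space in the output
Later sections refer to the blade speed $U$.
\nomenclature{$U$}{Blade speed}%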

In order to actually get a nomenclature you need to run makeindex (thanks to Ulrike Fischer for this tip):

# pdflatex draftreport
# makeindex draftreport.nlo -s nomencl.ist -o draftreport.nls

Did the job for me (a further pdflatex run then picks up the generated .nls file so the nomenclature actually appears). Further details are of course in the fine manual, which you should probably read…


Spreadsheets are Evil

Today I had the dubious pleasure of processing data in a spreadsheet.  I stopped doing this in 2002 and started using programs to process data, but this data was given to me by a colleague (one of the sharpest people I have the pleasure of working with) and was already half processed.  I spent half the day wondering if it would be quicker to go back to square one and write a script.  However, I carried on with the spreadsheet, which led me to the conclusion that spreadsheets are evil.

Despite this, most professional engineers use them as readily as they would a bit of paper.  But back to why they are evil:

  • There is no logical flow to the calculations – it is like following a program with huge numbers of goto statements – like trying to follow directions on a tube map of London with a teleport function
  • The variable names aren’t meaningful: A23 is pressure, whereas B23 is velocity
  • The data and the manipulation logic are intimately linked.  If you want to repeat the operation on another set of data you need to copy and paste, and if you have more data points you have to make sure all the cell ranges match up.

All this means that spreadsheets are extremely error-prone.  For example, the one I used today, put together by one of the sharpest tools in the box, had one serious systematic error in it and one quite minor one.  Errors arise in programming environments too, but there you can change the code and run it on the data again – fixing it in the spreadsheet was time consuming, as I had to go through each set of data and correct the analysis error in each sheet.

Despite this, they are extremely popular amongst my fellow engineers because they allow you to immediately adjust, alter and plot data.  Although better alternatives with much more sophisticated possibilities exist, such as GNU Octave or SciPy, I guess almost any computer has a spreadsheet on it.  When I worked for a prominent North of Scotland power generating company back in the day, analysis was largely done in spreadsheets, as installing software had a large overhead associated with it.  There was some massive spreadsheet that worked out the efficiency of the power station, but no-one knew how it worked, and I spent a few weeks implementing a pipe-flow analysis program in Lotus 1-2-3 – all of which would have been much easier with an actual programming language.

However, I have learned the error of my ways – spreadsheets are evil – avoid them!