A more complicated situation involves combining several TeX sources into a single interlinked site consisting of multiple pages and a composite index and bibliography. This is done in three passes.
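Firstly, each TeX file must be converted into XML, using a command along the lines of the following sketch (the file name doc is illustrative):

   # convert one TeX source to XML; no postprocessing yet
   latexml --dest=doc.xml doc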
Secondly, all XML files must be split and scanned, using a command along the lines of the following sketch (the input doc.xml and the destination path are illustrative):
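   # prescan pass: record ids, labels, etc. into DB; no output pages are written
   # (add --split if a single document should itself be divided into pages)
   latexmlpost --prescan --dbfile=DB --dest=site/doc.html doc.xml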
where DB names a file in which to store the scanned data. Other conversions, including writing the output file, are skipped in this prescanning step.
Finally, all XML files are cross-referenced and converted into the final format, using a command like the following sketch (again with illustrative paths):
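   # final pass: resolve cross-references from DB and write the output pages
   latexmlpost --noscan --dbfile=DB --dest=site/doc.html doc.xml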
which skips the now-unnecessary scanning step.
For example, consider a set of nominally stand-alone LaTeX documents: main (with title page, \tableofcontents, etc.), A (with a chapter), Aa (with a section), B (with a chapter), … and bib (with a \bibliography).
Assume that the documents use \lxDocumentID (from \usepackage{latexml}) to declare the ids main, main.A, main.A.a, main.B, … and bib, respectively.
And, of course, you'll have to arrange for the relevant counters to be initialized appropriately in each document, if needed.
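As a concrete sketch, document B might look like the following; the document class, chapter title, and counter value are illustrative assumptions:

   % B.tex -- a minimal sketch of one stand-alone chapter document
   \documentclass{book}
   \usepackage{latexml}      % provides \lxDocumentID
   \lxDocumentID{main.B}     % declares B's place below main in the site
   \setcounter{chapter}{1}   % illustrative: make B's chapter come out as Chapter 2
   \begin{document}
   \chapter{...}
   \end{document}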
Now, process the documents with a sequence of commands along the following lines (the database name site.db and the destination paths are illustrative):
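   # pass 1: convert each TeX source to XML
   latexml --dest=main.xml main
   latexml --dest=A.xml A
   latexml --dest=Aa.xml Aa
   latexml --dest=B.xml B
   latexml --dest=bib.xml bib

   # pass 2: prescan each XML file, accumulating ids and labels in site.db
   latexmlpost --prescan --dbfile=site.db --dest=site/main.html main.xml
   latexmlpost --prescan --dbfile=site.db --dest=site/A.html A.xml
   latexmlpost --prescan --dbfile=site.db --dest=site/Aa.html Aa.xml
   latexmlpost --prescan --dbfile=site.db --dest=site/B.html B.xml
   latexmlpost --prescan --dbfile=site.db --dest=site/bib.html bib.xml

   # pass 3: convert each XML file, resolving cross-references from site.db
   latexmlpost --noscan --dbfile=site.db --dest=site/main.html main.xml
   latexmlpost --noscan --dbfile=site.db --dest=site/A.html A.xml
   latexmlpost --noscan --dbfile=site.db --dest=site/Aa.html Aa.xml
   latexmlpost --noscan --dbfile=site.db --dest=site/B.html B.xml
   latexmlpost --noscan --dbfile=site.db --dest=site/bib.html bib.xml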
This will result in a site built under site/, with the following implied structure:
   main.html
     A.html
       Aa.html
     B.html
     ...
   bib.html