Also end URIs with '/' where appropriate.
Refrain from touching scripts/ and Stephen Crane's LWP authorship
note.
Signed-off-by: Markus Armbruster <armbru@pond.sub.org>
Partly inspired by GNU coreutils HACKING[*].
doc/coding section "git" is now redundant, except for the note on
avoiding whitespace changes. Move that to section "Code formatting",
and delete section "git".
[*] http://git.savannah.gnu.org/cgit/coreutils.git/plain/HACKING?id=77da73c
Signed-off-by: Markus Armbruster <armbru@pond.sub.org>
nstr_parse_val() interprets argument "" as (empty) identifier, then
returns a pointer right beyond the end of the string.
The argument points into player->argbuf[]. If another argument
follows the conditional, it gets appended to the conditional. Else,
whatever's left there from previous commands gets appended. If the
argument is at the very end of player->argbuf[], we parse beyond the
buffer, until we run into a syntax error, or a zero byte.
Since player->argbuf[] is followed by a bunch of pointers, a syntax
error is almost certain. If we somehow manage to parse all the
pointers and player->lasttime, the runaway parse will end at
player->btused, because that's definitely zero when conditionals get
parsed.
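To illustrate just the failure mode, here is a minimal stand-alone
sketch with hypothetical names; it is not the real nstr_parse_val(),
only a scanner that likewise returns a pointer one past the
terminating NUL, so the caller resumes parsing whatever happens to
follow in the surrounding buffer:

    #include <stdio.h>

    /* Hypothetical scanner: consume an identifier, then step over what
     * it assumes is a trailing delimiter.  For the argument "" that
     * step lands one byte beyond the end of the string. */
    static char *
    scan_ident(char *s)
    {
        while (*s >= 'a' && *s <= 'z')
            s++;
        return s + 1;
    }

    int
    main(void)
    {
        /* Stand-in for player->argbuf[]: the current argument is the
         * empty string at offset 0; stale text from an earlier command
         * follows it. */
        char argbuf[32] = "\0dist 2,3 leftover";
        char *p = scan_ident(argbuf);       /* argument is "" */

        printf("parse resumes at: \"%s\"\n", p);    /* the stale text */
        return 0;
    }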
Combine the two loops looking for \*QNAME\*U and "info NAME" into one.
Recognize "info NAME" only with the closing '"' to be present.
No change with current info sources.
The fire command requires at least one gun for depth charging, but
missions and return fire don't. Has always been that way, except
between 4.0.6 and commit a3ad623 (v4.3.12), when depth charging worked
exactly like gun fire, and guns were consistently required.
Require guns for all ways to drop depth charges.
So you don't have to micromanage workers to maximize useful work.
The previous commit made the problem a bit worse. If you had a few
workers too many before, you perhaps produced an extra unit. Now, you
get to keep the extra work instead. Useless, unless it rolls over.
produce() limits production to how many units the workers can produce,
rounding randomly. It charges work for the units actually produced,
discarding fractions.
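The random rounding is roundavg(); conceptually it rounds up with
probability equal to the fractional part, so the expected result
equals the input.  A minimal sketch of that behaviour, not the
server's actual implementation:

    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Stochastic ("average") rounding: round up with probability equal
     * to the fractional part, so the result is unbiased on average. */
    int
    roundavg_sketch(double val)
    {
        double flr = floor(val);

        /* rand() / (RAND_MAX + 1.0) is a uniform draw in [0, 1). */
        if (rand() / (RAND_MAX + 1.0) < val - flr)
            flr += 1.0;
        return (int)flr;
    }

    int
    main(void)
    {
        /* 37.5 comes out as 37 or 38 with equal probability; that's
         * the fluctuation in the tech center example below. */
        printf("%d %d %d\n", roundavg_sketch(37.5),
               roundavg_sketch(37.5), roundavg_sketch(37.5));
        return 0;
    }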
If you get lucky with the random rounding, you may get a bit of extra
work done for free. Else, you get to keep the unused work, and may
even be undercharged a tiny bit of work. Has always been that way.
The production command assumes the random rounding rounds up if and
only if the probability to do so is at least 50%. Thus, it's
frequently off by one for sectors producing at their worker limit.
The budget command runs the update code, and is therefore also off by
one, only differently.
Rather annoying for tech and research centers, where a single unit
matters. A tech center with full civilian population can produce 37.5
units in 60 etus. Given enough materials, it'll fluctuate between 37
and 38. Production consistently predicts 38, and budget randomly
predicts either 37 or 38. Both are off by one half the time.
Fix this as follows: limit production to the amount the workers can
produce (no rounding). Work becomes a hard limit, not subject to
random fluctuations. Randomly round the work charged for actual
production. On average, this charges exactly the work that's used.
More importantly, production and budget now predict how much gets
produced more accurately. They're still not exact, as the amount of
work available for production remains slightly random.
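Read as code, the change amounts to roughly the following.  The names
are made up, roundavg_sketch() is the helper sketched above, and this
is an interpretation of the description, not the server's produce()
code:

    int roundavg_sketch(double val);    /* sketched above */

    /* Before: the unit limit was randomly rounded, and the work charge
     * was truncated. */
    int
    limit_old(double work, double work_per_unit)
    {
        return roundavg_sketch(work / work_per_unit);
    }

    int
    charge_old(int made, double work_per_unit)
    {
        return (int)(made * work_per_unit);     /* fractions discarded */
    }

    /* After: work is a hard limit on production (truncation instead of
     * random rounding), and the work charged is randomly rounded, so
     * on average it equals the work actually used. */
    int
    limit_new(double work, double work_per_unit)
    {
        return (int)(work / work_per_unit);
    }

    int
    charge_new(int made, double work_per_unit)
    {
        return roundavg_sketch(made * work_per_unit);
    }

Here work_per_unit stands for the work one unit costs after dividing
by sector p.e., e.g. 1 / 0.4 in the example below.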
This also "fixes" the smoke test on a i686 Debian 6 box for me. The
root problem is that floating-point subexpressions may either be
computed in double precision or extended precision. Different
machines (or different compilers, or even different compiler flags)
may use different precision, and get different results.
Example: producing 108 units at one work per unit, sector p.e. 0.4
needs to charge 108 / 0.4 work. Computed in double precision, this
gets rounded to 270.0, then truncated to 270. In 80-bit extended
precision, it gets rounded to 269.999999999, then truncated to 269.
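The example can be reproduced with a stand-alone program.  This is
just an illustration; the second conversion behaves like the 80-bit
case only where long double is wider than double (x87, for instance),
otherwise both lines print 270:

    #include <stdio.h>

    int
    main(void)
    {
        double pe = 0.4;    /* stored as a double slightly above 0.4 */

        /* Rounded to double precision: exactly 270.0. */
        double q_dbl = 108 / pe;
        /* Kept in wider precision: just below 270. */
        long double q_ext = 108 / (long double)pe;

        printf("double:   %d\n", (int)q_dbl);   /* 270 */
        printf("extended: %d\n", (int)q_ext);   /* 269 if wider */
        return 0;
    }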
With random rounding instead of truncation, the probability for a
different result is vanishingly small. However, this commit
introduces truncation in another place. It just happens not to mess
up the smoke test there. I doubt this is the last time this kind of
problem upsets the smoke test.
We can make actual = roundavg(material_consume * prodeff) products.
When we reduce actual, we have to reduce material_consume, too. The
code does that like this:
    material_consume = roundavg(actual' * material_consume / actual)
That's double rounding: actual is itself already a roundavg() result,
so its rounding error feeds into a second one. Do this instead:
    material_consume = roundavg(actual' / prodeff)
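As a hedged sketch with illustrative names (roundavg_sketch() is the
helper sketched earlier, not Empire's roundavg()), the two variants
side by side:

    int roundavg_sketch(double val);    /* sketched above */

    /* Old: actual is already a roundavg() result, so its rounding
     * error feeds into a second roundavg(): double rounding. */
    int
    scale_back_old(int actual_new, int material_consume, int actual)
    {
        return roundavg_sketch((double)actual_new * material_consume
                               / actual);
    }

    /* New: one rounding step, using the exact ratio instead of the
     * already rounded actual. */
    int
    scale_back_new(int actual_new, double prodeff)
    {
        return roundavg_sketch(actual_new / prodeff);
    }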
Broken in commit 3a7d7fa, which enlarged struct natstr member
nat_hostaddr[] from 32 to 46 characters, but neglected to update the
ca_len in nat_ca[]. Consequently, the address is truncated in xdump.
Can also break country * ?ip=... and such, but that's exotic.
emp_server and empdump refuse to start on most big endian hosts,
because ef_verify_config() chokes on mdchr_ca[]:
Config meta uid 0 field type: value 0 is not in symbol table meta-type
Config meta uid 1 field type: value 0 is not in symbol table meta-type
Config meta uid 2 field type: value 0 is not in symbol table meta-type
Config meta uid 3 field type: value 0 is not in symbol table meta-type
Config meta uid 4 field type: value 0 is not in symbol table meta-type
Broken in commit 06a0036 (v4.3.12), which changed struct castr member
ca_type from packed_nsc_type (typedef'ed to char) to enum nsc_type,
but neglected to update the ca_type in mdchr_ca[].
On little endian hosts, the selector reads the least significant byte,
with sign extension. Happens to work, because the type values are all
sufficiently small integers.
On big endian hosts, the selector reads the most significant byte,
which is always zero (NSC_NOTYPE). Makes ef_verify_config() fail.
Except when sizeof(enum nsc_type) == 1. Then the type selector works
fine, and ef_verify_config() succeeds, but we run into the next
problem: the same commit also changed member ca_flags from nsc_flags
(typedef'ed to unsigned char) to int without updating the ca_type in
mdchr_ca[]. This breaks "only" xdump meta column flags.
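For illustration only (not Empire's nsc code), reading a widened
member through a descriptor that still claims it is a char comes down
to this:

    #include <stdio.h>

    struct rec {
        int type;       /* widened member: enum, short, int, ... */
    };

    int
    main(void)
    {
        struct rec r = { 5 };   /* any small, positive type code */

        /* What a char-typed selector sees: the first byte at the
         * member's address (sign-extended if char is signed). */
        char as_char = *(char *)&r.type;

        printf("full value %d, first byte %d\n", r.type, as_char);
        /* Little endian: 5 and 5.  Big endian: 5 and 0. */
        return 0;
    }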
v4.3.12 was released in April 2008. Either nobody has tried to run a
game on a big endian host since, or all who did gave up quietly,
without reporting the problem.
We clearly need to test on a wider range of machines.
Broken in commit 14ea670 (v4.3.8), which changed struct trdstr member
trd_type from char to short, but neglected to update the ca_type in
trade_ca[].
On little endian hosts, the selector reads the least significant byte,
with sign extension. Happens to work, because the type values are all
sufficiently small integers.
On big endian hosts, the selector reads the most significant byte,
which is always zero (EF_SECTOR). Messes up xdump trade badly.
Broken in commit 09248d0 (v4.3.8), which changed struct loststr member
lost_type from char to short, but neglected to update the ca_type in
lost_ca[].
On little endian hosts, the selector reads the least significant byte,
with sign extension. Happens to work, because the type values are all
sufficiently small integers.
On big endian hosts, the selector reads the most significant byte,
which is always zero (EF_SECTOR). Messes up xdump lost badly. Also
breaks lost * ?type=..., but that's exotic.
"Unusually long" topics are marked with a "!" in subject indexes.
This should use the line count of the formatted page, but that's too
much trouble, so commit 4c0b4c0 (v4.3.27) approximated it by "source
file has more than 9999 bytes". Change that to "source file has more
than 300 lines".
Since subjects were added in Empire 2, we've always picked them up
from .SA requests. If you mistype a subject there, you get an "is a
NEW subject" warning, and incorrect subject pages. When building a
pristine tree, you get bogus "is a NEW subject" warnings for all
subjects. If you somehow delete the generated subjects.mk, but not
the generated subject files, the build breaks.
Declare subjects in Make variable subjects. Drop generated makefile
subjects.mk.
Treat unknown topics in .SA arguments as errors. This replaces the
"$subj is a NEW subject" warning.
Treat subjects without member pages as errors. This replaces the "The
subject $subj has been removed" warning.
Safer and simpler.
We used to do all the info indexing work in info.pl: find subjects,
create subjects.mk (to tell make the list of subjects), the subject
pages, and TOP.t. Worked, but touching an info page triggered a full
rebuild of all subject pages and TOP.t.
Commit 2f14064 (v4.3.0) tried to avoid that by splitting info.pl into
findsubj.pl, mksubj.pl, mktop.pl. findsubj.pl puts not just the
subjects into subjects.mk, but also make rules for the subject pages,
to guide their remaking. mksubj.pl creates a single subject page.
mktop.pl creates TOP.t.
Unfortunately, this doesn't work so well. Since subjects.mk doesn't
exist in a virgin tree, we use -include. Unwanted consequence:
findsubj.pl failure doesn't stop make. Moreover, the complex make
machinery breaks down when info sources get removed or subjects get
dropped.
Go back to the old method, except keep mktop.pl separate, as that part
works just fine, and use simpler make rules. mksubj.pl now creates
subjects.mk and all subject pages, like info.pl did.
This effectively reverts most of commit 2f14064. I'll address the
excessive rebuilding of subject pages in a different way shortly.
Remaking config.h or config.h.in updates the target only when its
contents actually change. This is important, because after updating
config.h we need to recompile everything.
The make rules to do that are straight from the Autoconf manual. But
they don't work: the rules that connect config.h and config.h.in to
stamp-h and stamp-h.in don't have recipes. Since make doesn't have an
implicit rule either, it concludes that the target remains unchanged.
It still updates the prerequisites. The recipes for updating the stamp
files then change the targets behind make's back. Make misses
the change of config.h and/or config.h.in, and we get an incomplete
rebuild.
The rules need empty recipes instead. The Autoconf manual was fixed
accordingly in autoconf.git commit 6b42b38.
Mark obsolete pages with '+' in subject pages. Drop the separate
"Obsolete" subject page: move "info Innards" to subject "Server", and
"info update" to "Updates" (where it came from in commit a5764534,
v4.3.10).