Unlike POSIX sockets, Windows sockets are not file descriptors, but
"OS handles", with a completely separate set of functions.
However, Windows can create a file descriptor for a socket, and return
a file descriptor's underlying handle. Use that instead of wrapping
our own file descriptors around Windows file descriptors and sockets.
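The descriptor/handle mapping this relies on can be sketched as follows.
This is a minimal sketch: the wrapper names are ours, only
_open_osfhandle() and _get_osfhandle() are real CRT calls, and the
POSIX branch exists just to make the sketch portable.

```c
#include <assert.h>

#ifdef _WIN32
#include <winsock2.h>
#include <io.h>
#include <fcntl.h>

/* Wrap a C run-time file descriptor around a SOCKET handle, and get
   the handle back out of the descriptor. */
static int fd_from_socket(SOCKET s)
{
    return _open_osfhandle((intptr_t)s, _O_BINARY);
}

static SOCKET socket_from_fd(int fd)
{
    return (SOCKET)_get_osfhandle(fd);
}
#else
/* On POSIX, a socket already is a file descriptor. */
static int fd_from_socket(int s) { return s; }
static int socket_from_fd(int fd) { return fd; }
#endif
```

With this, the rest of the code can pass plain file descriptors
around, and only the Windows-specific innards need to recover the
SOCKET.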
Remove the wrapping machinery: MAX_FDS, enum fdmap_io_type, struct
fdmap, fdmap[], nfd, get_fd(), free_fd(), set_fd(), lookup_handle(),
lookup_fd().
Rewrite SOCKET_FUNCTION(), posix_accept(), posix_socket(),
posix_close(), ftruncate(), posix_open(), posix_read(), posix_write(),
fcntl().
Remove FILE_FUNCTION(), posix_fstat(), posix_lseek(),
SHARED_FUNCTION(), and fileno(), because the system's functions now
work fine.
posix_fsync() is used only #ifdef _WIN32; remove it and call
_commit() directly.
The old code stuffed WSA error codes into errno, which doesn't work.
Use new w32_set_winsock_errno() to retrieve, convert & stuff into
errno. Adapt inet_ntop() to set the WSA error code instead of errno,
so it can use w32_set_winsock_errno().
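The conversion at the heart of w32_set_winsock_errno() can be sketched
like this (abridged toy table; the real function covers many more
codes, and the WSA constants are defined here only so the sketch is
self-contained):

```c
#include <assert.h>
#include <errno.h>

/* A few Winsock error codes, as defined by <winsock2.h> */
#define W32_WSAEINTR       10004
#define W32_WSAEWOULDBLOCK 10035
#define W32_WSAECONNRESET  10054

/* Map a Winsock error code to the closest errno value */
static int wsa_to_errno(int wsa_error)
{
    switch (wsa_error) {
    case W32_WSAEINTR:       return EINTR;
    case W32_WSAEWOULDBLOCK: return EWOULDBLOCK;
    case W32_WSAECONNRESET:  return ECONNRESET;
    default:                 return EIO;  /* fallback, hypothetical */
    }
}

/* The wrapper itself would then do something like:
   errno = wsa_to_errno(WSAGetLastError()); */
```

Stuffing the raw WSA code into errno doesn't work because WSA codes
(10000 and up) don't match any errno value the rest of the code
tests for.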
Move EWOULDBLOCK from sys/socket.h to w32misc.h, and drop unused
ENOTSOCK, EAFNOSUPPORT.
Use SOCKET rather than int in Windows-specific code.
Unlike POSIX sockets, Windows sockets are not file descriptors, but
"OS handles", with a completely separate set of functions.
However, Windows can create a file descriptor for a socket, and return
a file descriptor's underlying handle. Use that instead of our gross
hacks to keep up the illusion that sockets are file descriptors.
Slightly dirty: we put file descriptors into fd_set. Works because
both boil down to int. Change w32_select(), w32_socket(),
w32_connect(), w32_recv(), w32_writev_socket(), w32_send() to take and
return only file descriptors, and map to sockets internally. Replace
w32_close_socket() by w32_close(), and drop the close() macro hackery
that made tcp_connect(), host_connect() use w32_close_socket(). New
fd_is_socket().
Windows provides select()-like functions only for handles. Because of
that, the client used a handle for reading script files, and stored it
in file descriptor input_fd. Drop this dirty hack, use a file
descriptor instead. Works because we can get its underlying handle.
Remove the dirty macro hackery that made play(), ring_from_file() and
doexecute() unwittingly work with a handle. Remove w32_openhandle()
and w32_close_handle(). Replace w32_readv_handle() by w32_readv_fd().
Update w32_select().
Remove w32_openfd(), it's not really needed.
The old code stuffed WSA error codes into errno, which doesn't work.
Use new w32_set_winsock_errno() to convert & stuff.
Fix signed vs. unsigned warnings in Windows client.
Move the struct sigaction replacement next to the sigaction()
replacement.
Rename sysdep_init() to w32_sysdep_init() for consistency.
When select() gets interrupted by SIGINT while a handler without
SA_RESTART is installed, it returns immediately with EINTR. w32_select()
did that only while it waited for standard input to become ready for
reading. This isn't the case when:
* The client has already received EOF on standard input. But then the
action is SIG_DFL, so there was no problem.
* Reading standard input is suspended until the server drains the
input buffer. Then reaction to Ctrl-C got delayed until the socket
got ready, and w32_select() returned normally. Harmless, because
the reaction merely appends to the input buffer.
Change w32_select() to match select()'s behavior anyway.
pthread.c's empth_select() returned 1 instead of 0 when empth_wakeup()
interrupted select(). This made io_input() attempt to read input,
which failed with WSAEWOULDBLOCK. The failure then got propagated all
the way up, and the player got logged out. Fix by returning 0 in that
case.
start_server() creates the thread running player_accept() before it
calls update_init(). However, update_init() initializes stuff used by
player threads: update_time[] and play_lock. In theory, a player
thread could start before that, and crash when taking the
uninitialized play_lock.
Delay starting that thread until after update_init().
A player thread may sleep on input or output, except:
(1) While it is executing a C_MOD command, it may only sleep on input.
(2) While it is being aborted by the update or shutdown, it may not
sleep at all.
To find out whether a player thread may sleep on input, code has to
check condition (2). It needs to do that in recvclient().
To find out whether it may sleep on output, it has to check both
conditions. It needs to do that in pr_player() and upr_player().
The code tracked condition (1) in global variable play_wrlock_wanted.
It checked condition (2) by examining struct player member command.
Replace all that by new struct player member may_sleep. Initialize it
in player_new(), update it in dispatch(), shutdwn() and update_run().
This makes the tests in recvclient(), pr_player() and upr_player()
obvious. play_wrlock_wanted() is now unused, remove it.
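Sketched, the new member and the two tests could look like this
(enumerator and function names are assumptions based on the
description above):

```c
#include <assert.h>

/* How much a player thread may sleep right now */
enum player_sleep {
    PLAYER_SLEEP_NEVER,      /* aborted by update/shutdown: condition (2) */
    PLAYER_SLEEP_ON_INPUT,   /* executing a C_MOD command: condition (1) */
    PLAYER_SLEEP_FREELY      /* default, set in player_new() */
};

struct player {
    enum player_sleep may_sleep;
    /* ... other members elided ... */
};

/* The test recvclient() needs */
static int may_sleep_on_input(struct player *p)
{
    return p->may_sleep >= PLAYER_SLEEP_ON_INPUT;
}

/* The test pr_player() and upr_player() need */
static int may_sleep_on_output(struct player *p)
{
    return p->may_sleep == PLAYER_SLEEP_FREELY;
}
```

Both conditions collapse into a single ordered enumeration, which is
why the tests become obvious one-liners.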
Player threads may only sleep under certain conditions. In
particular, they must not sleep while a command is being aborted by
the update or shutdown.
io.c should not know about that. Yet io_output_all() does, because it
needs to give up when update or shutdown interrupt it. The function
was introduced in Empire 2, but it didn't give up then. Fixed in
commit a7fa7dee, v4.2.22. The fix dragged unwanted knowledge of
command abortion into io.c.
To clean up this mess, io_output_all() has to go.
Its first user is io_write(). io_write() automatically flushes the queue.
In wait-mode, it calls io_output_all() when the queue is longer than
the bufsize, to attempt flushing the queue completely. In
no-wait-mode, it calls io_output() every bufsize bytes. Except the
test for that is screwy, so it actually misses some of the flush
conditions.
The automatic flush makes io_write() differ from io_gets(), which is
ugly. It wasn't present in BSD Empire 1.1. Remove it again, dropping
io_write()'s last argument.
Flush the queue in its callers pr_player() and upr_player() instead.
Provide new io_output_if_queue_long() for them. Requires new struct
iop member last_out to keep track of queue growth. pr_player() and
upr_player() call it repeatedly until it makes no more progress. This
flushes a bit less eagerly in wait-mode, and a bit more eagerly in
no-wait-mode.
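The growth tracking can be sketched with a toy model (the real code
operates on struct iop and its output queue; the "write" here just
pretends everything queued goes out):

```c
#include <assert.h>

struct iop {
    int qlen;       /* bytes currently queued for output */
    int bufsize;
    int last_out;   /* queue length after the previous write */
};

/* toy stand-in for io_output(): flush the whole queue */
static int io_output(struct iop *iop)
{
    int n = iop->qlen;

    iop->qlen = 0;
    iop->last_out = 0;
    return n;
}

/* Flush only when the queue has grown by at least bufsize bytes
   since the last write; return 0 when there's nothing to do */
static int io_output_if_queue_long(struct iop *iop)
{
    if (iop->qlen - iop->last_out < iop->bufsize)
        return 0;
    return io_output(iop);
}
```

Callers loop on it until it returns 0, i.e. until a flush attempt
makes no more progress.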
Its second user is recvclient(). It needs to flush the queue before
potentially sleeping in io_input(). Do that with a simple loop around
io_output(). No functional change there.
LWP and Windows implementations already did that. Rewrite the
pthreads implementation.
The write-bias makes the stupid play_wrlock_wanted busy wait in
dispatch() unnecessary. Remove it.
Return number of bytes written on success, -1 on error. In
particular, return zero when nothing was written because the queue was
empty, or because the write slept and got woken up, or because the
write refused to sleep.
Before, it instead returned the number of bytes remaining to be
written when empth_select() failed, when woken up from sleep, or
refusing to sleep. You couldn't tell from the return value whether
the call made progress writing out the queue.
The current callers don't actually notice the change.
Don't set IO_EOF when writev() returns zero. I don't think this could
happen, but it's wrong anyway, because a short write should not stop
future reads.
The blocking I/O option makes no sense in the server, because it
blocks the server process instead of the thread. In fact, it's been
unused since Empire 2, except for one place, where it was used
incorrectly, and got removed in the previous commit.
Make I/O non-blocking in io_open() unconditionally. Remove IO_NBLOCK
and io_noblocking().
The call switched the connection with the player to blocking I/O for
draining of output before closing the connection. Looks scary,
because blocking on I/O blocks the complete server process, not just
the player thread. But we don't do input, and we do output only with
IO_WAIT, which can't block. So this has no effect.
Chainsaw used this together with the notify callback to make the iop
data type usable for sockets it listened on, so that io_select() could
multiplex them along with the sockets used for actual I/O.
io_select() became unused in Empire 2, and finally got removed in
commit 875d72a0, v4.2.13. That made the IO_NEWSOCK and the notify
callback defunct. The latter got removed in commit 7d5a6b81, v4.3.1.
Calculation of the sleep duration suffered integer underflow when
time_t is unsigned and the wakeup time is in the past. This made
empth_sleep() sleep for "a few" years instead of not at all.
F_GETFL always failed with WSAEINVAL. io_noblocking() always failed
without doing anything. Callers didn't check for failure, and newly
opened sockets remained blocking. But because WSAEventSelect() makes
sockets non-blocking automatically, they became non-blocking soon
enough to keep things working.
Remove the broken code to query the non-blocking state, and just
return 0. Document why this works.
While there, simplify the F_SETFL case by using ioctlsocket() instead
of WSAIoctl().
Replace the fixed $1 per ETU maintenance for capital/city sectors that
are at least 60% efficient by a configurable maintenance cost, payable
regardless of efficiency. The only change in the default
configuration is that inefficient capitals now pay maintenance.
Charging sector maintenance regardless of efficiency is consistent
with unit maintenance.
New struct dchrstr member d_maint and sector-chr selector maint. Make
show_sect_build() show it. Change produce_sect() to record
maintenance in new slot p_sect[SCT_MAINT] instead of abusing
p_sect[SCT_CAPIT]. Replace the "Capital maintenance" line in budget
by "Sector maintenance".
Print sector type mnemonic and name, like show sect s and c. Print
"can't" instead of negative number for sectors players can't designate
(this was not an issue before the previous commit). Show build cost
per 100%, like show ship, land, plane and nuke. Size the columns more
sensibly.
show sect b needs to explain any sector players can build.
show_sect_build() omitted sectors players can't designate. That's
wrong, because players can certainly own and thus build sectors they
can't designate. Test for infinite mobility cost instead, like
show_sect_stats().
Commit 7da69c92 (v4.3.20) removed use of automatic supply from
prod_ship(). It removed bp_enable_cachepath(), but left behind the
final bp_disable_cachepath(); bp_clear_cachepath(). Clean that up.
With etu_per_update large and resource depletion quick, a sector can
produce more work than is required to fully deplete a mine. In that
case, produce() and prod() limit production to what is actually in the
ground. Except produce() got it wrong for sector types with
production efficiency other than 100%.
This affects mountains in the stock game, but only with impractically
large etu_per_update.
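The corrected limiting can be sketched as follows (toy model; names
and scaling are ours, and the real produce() is considerably more
involved):

```c
#include <assert.h>

/* Cap the raw material drawn from the ground *before* applying the
   production efficiency, so output never implies digging up more
   resource than the sector holds. */
static int units_produced(int work_limit, int resource, int prodeff_pct)
{
    int material = work_limit < resource ? work_limit : resource;

    return material * prodeff_pct / 100;
}
```

For a sector type with 100% production efficiency the order of the cap
doesn't matter, which is why the bug only shows for types like
mountains.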
configure checked for library functions with LIBS instead of
LIBS_server, which could break detection of getaddrinfo() on systems
where LIB_SOCKET isn't empty.
GNUmakefile put @PTHREAD_LIBS@ only in LDLIBS, which breaks linking of
server and possibly client on systems where it is not empty.
Broken in commit 8b778634.
We use the C run-time, so we better use its _beginthread(), too.
CreateThread() can lead to deadlocks, at least with some versions of
the C run-time. Broken in commit f082ef9f, v4.3.11.
stdin_read_thread() zeroed bounce_status on failure, effectively
treating it like EOF. Fix by setting it to -1.
It treated main thread termination like failure, and set bounce_error
to a bogus value. Can't happen, because the program terminates when
the main thread terminates, and the only user of bounce_error is the
main thread anyway. Regardless, handle the case by terminating,
because that's more obviously correct.
Broken in commit f082ef9f, v4.3.11.
Commit 8c3b8d10 replaced the getpass() for Windows by a generic
ersatz_getpass(). This lost the "switch off echo" feature, with the
excuse that it doesn't work for me (MinGW & Wine). Turns out it works
under real Windows. Restore the feature.
The old upstream version carries the original BSD license, which is
incompatible with the GPL. Fix by rebasing to a version that is
licensed under the 2-clause BSD license.