drafts, hunchentoot-v: Update
author Lucian Mogosanu <lucian@mogosanu.ro>
Tue, 13 Aug 2019 09:26:38 +0000 (12:26 +0300)
committer Lucian Mogosanu <lucian@mogosanu.ro>
Tue, 13 Aug 2019 09:26:38 +0000 (12:26 +0300)
drafts/000-hunchentoot-v.markdown

index 39e2d5d..9b84bcc 100644
@@ -75,7 +75,7 @@ sent to the client.
 
 Below I'll detail (top-down) the implementation of
 handle-incoming-connection, shutdown and (the additional)
-initialize-instance -- the execute-acceptor executed is that of the
+initialize-instance -- the execute-acceptor used is that of the
 [superclass](#mtt-ea).
 
 <a name="otpct-ii" href="#otpct-hic">[otpct-ii]</a>
@@ -89,10 +89,15 @@ case, max-accept-count doesn't really make sense, because all new
 connections are accepted.
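+
+For reference, the sanity checking performed here looks something
+like the following -- a sketch, not a verbatim quote; parameter-error
+is Hunchentoot's own condition signaller:
+
+    (defmethod initialize-instance :after
+        ((taskmaster one-thread-per-connection-taskmaster) &rest init-args)
+      (declare (ignore init-args))
+      ;; a max-accept-count only makes sense relative to a thread
+      ;; ceiling: it must come together with, and be greater than,
+      ;; max-thread-count
+      (when (taskmaster-max-accept-count taskmaster)
+        (unless (taskmaster-max-thread-count taskmaster)
+          (parameter-error
+           "MAX-THREAD-COUNT must be supplied if MAX-ACCEPT-COUNT is"))
+        (unless (> (taskmaster-max-accept-count taskmaster)
+                   (taskmaster-max-thread-count taskmaster))
+          (parameter-error
+           "MAX-ACCEPT-COUNT must be greater than MAX-THREAD-COUNT"))))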
 
 <a name="otpct-hic" href="#otpct-hic">[otpct-hic]</a>
-[**handle-incoming-connection**][ht-otpct-hic]:
+[**handle-incoming-connection**][ht-otpct-hic]: Calls
+[create-request-handler-thread](#otpct-crht); in other words, it
+creates a new thread to handle requests associated with the current
+connection.
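+
+Modulo details, the method body is a one-line delegation; roughly:
+
+    (defmethod handle-incoming-connection
+        ((taskmaster one-thread-per-connection-taskmaster) socket)
+      ;; all the actual work happens on the freshly-created thread
+      (create-request-handler-thread taskmaster socket))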
 
 <a name="otpct-s3" href="#otpct-s3">[otpct-s3]</a>
-[**shutdown**][ht-otpct-s3]:
+[**shutdown**][ht-otpct-s3]: Joins (in the Unix sense of "thread
+join") the acceptor-process, i.e. the [listener](#mtt-ea) thread, and
+returns the current taskmaster.
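+
+A sketch of its shape, assuming bordeaux-threads' thread-alive-p and
+the acceptor-process accessor:
+
+    (defmethod shutdown ((taskmaster one-thread-per-connection-taskmaster))
+      ;; "join": poll until the listener thread finishes, then return
+      ;; the taskmaster
+      (loop while (bt:thread-alive-p (acceptor-process taskmaster))
+            do (sleep 1))
+      taskmaster)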
 
 As observed, these methods are implemented using the following
 "support" methods and functions:
@@ -117,25 +122,75 @@ under max-accept-count, notifies listener via
 handled.
 
 <a name="otpct-nfc" href="#otpct-nfc">[otpct-nfc]</a>
-[**note-free-connection**][ht-otpct-nfc]:
+[**note-free-connection**][ht-otpct-nfc]: [Signals][ht-cvs] the
+taskmaster's wait-queue; as the name suggests, it's used to announce
+that a "slot" has freed up for another connection to be handled.
 
 <a name="otpct-wffc" href="#otpct-wffc">[otpct-wffc]</a>
-[**wait-for-free-connection**][ht-otpct-wffc]:
+[**wait-for-free-connection**][ht-otpct-wffc]: [Waits][ht-cvw] on the
+taskmaster's wait-queue for a free connection "slot"; used when there
+aren't (yet) enough resources to process a given connection.
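+
+The two form the classical condition variable signal/wait pair; a
+sketch, using the names of the [compat][ht-cvs] wrappers:
+
+    (defmethod note-free-connection
+        ((taskmaster one-thread-per-connection-taskmaster))
+      ;; a worker is done: announce that a "slot" has freed up
+      (bt:with-lock-held ((taskmaster-wait-lock taskmaster))
+        (condition-variable-signal (taskmaster-wait-queue taskmaster))))
+
+    (defmethod wait-for-free-connection
+        ((taskmaster one-thread-per-connection-taskmaster))
+      ;; block until the thread-count drops below the ceiling
+      (bt:with-lock-held ((taskmaster-wait-lock taskmaster))
+        (loop until (< (taskmaster-thread-count taskmaster)
+                       (taskmaster-max-thread-count taskmaster))
+              do (condition-variable-wait (taskmaster-wait-queue taskmaster)
+                                          (taskmaster-wait-lock taskmaster)))))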
 
 <a name="otpct-tmtr" href="#otpct-tmtr">[otpct-tmtr]</a>
-[**too-many-taskmaster-requests**][ht-otpct-tmtr]:
+[**too-many-taskmaster-requests**][ht-otpct-tmtr]: Calls
+[acceptor-log-message][ht-alm] to log the situation where the
+taskmaster's wait-queue is full or, if max-accept-count isn't set,
+where the thread-count has reached its ceiling, i.e. max-thread-count.
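+
+Which boils down to something like:
+
+    (defmethod too-many-taskmaster-requests
+        ((taskmaster one-thread-per-connection-taskmaster) socket)
+      (declare (ignore socket))
+      ;; nothing but a log entry; the 503 reply proper is sent by
+      ;; send-service-unavailable-reply
+      (acceptor-log-message (taskmaster-acceptor taskmaster) :warning
+                            "Can't handle a new request, too many request threads already"))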
 
 <a name="otpct-crht" href="#otpct-crht">[otpct-crht]</a>
-[**create-request-handler-thread**][ht-otpct-crht]:
+[**create-request-handler-thread**][ht-otpct-crht]: a. Wrapped in a
+[handler-case\*][ht-hcs], b. starts a new thread, c. which calls
+[handle-incoming-connection%](#otpct-hic2). In case of errors, it
+d1. closes the current connection's socket stream, aborting the
+connection; and d2. logs the error.
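+
+Put together, steps a. through d2. read approximately as below; mind
+that the worker thread name format is my own guess, and I'm calling
+bordeaux-threads directly where the original goes through its own
+thread-starting wrapper:
+
+    (defmethod create-request-handler-thread
+        ((taskmaster one-thread-per-connection-taskmaster) socket)
+      (handler-case*                    ; a. Hunchentoot's own variant
+          (bt:make-thread               ; b.
+           (lambda ()
+             (handle-incoming-connection% taskmaster socket)) ; c.
+           :name (format nil "worker-~A" (client-as-string socket)))
+        (error (cond)
+          (ignore-errors                ; d1.
+            (close (usocket:socket-stream socket) :abort t))
+          (acceptor-log-message         ; d2.
+           (taskmaster-acceptor taskmaster) :error
+           "Error while creating worker thread: ~A" cond))))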
 
 <a name="otpct-hic2" href="#otpct-hic2">[otpct-hic2]</a>
-[**handle-incoming-connection%**][ht-otpct-hic]:
+[**handle-incoming-connection%**][ht-otpct-hic2]: The description
+contained in the function's definition is pretty good, but
+nevertheless, let's look at this in more detail: a. calls
+[increment-taskmaster-accept-count](#otpct-itac); b. creates a local
+binding for process-connection%, which b1. calls
+[process-connection][ht-pc] b2. with the [thread-count
+incremented](#otpct-ittc); and c. implements the logic described below.
+
+c1. *if* max-thread-count is null, i.e. there's no thread limit,
+*then* process-connection%; otherwise, c2. *if* either max-accept-count
+is set and accept-count has reached it, *or* max-accept-count isn't
+set and thread-count has reached max-thread-count, *then* call
+[too-many-taskmaster-requests](#otpct-tmtr) and
+[send-service-unavailable-reply](#otpct-ssur), which ends the current
+connection; otherwise, c3. *if* max-accept-count is set *and*
+thread-count has reached max-thread-count, *then*
+[wait-for-free-connection](#otpct-wffc) and, when unblocked,
+process-connection%; otherwise, c4. process-connection%.
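+
+For clarity, here's the whole dispatch sketched in code -- a
+paraphrase, not a verbatim quote of [the source][ht-otpct-hic2]:
+
+    (defmethod handle-incoming-connection%
+        ((taskmaster one-thread-per-connection-taskmaster) socket)
+      (increment-taskmaster-accept-count taskmaster)        ; a.
+      (flet ((process-connection% ()                        ; b.
+               ;; b2. bracket PROCESS-CONNECTION between the
+               ;; thread-count (and, where relevant, accept-count)
+               ;; bookkeeping
+               (increment-taskmaster-thread-count taskmaster)
+               (unwind-protect
+                    (process-connection (taskmaster-acceptor taskmaster)
+                                        socket)             ; b1.
+                 (decrement-taskmaster-thread-count taskmaster)
+                 (when (taskmaster-max-accept-count taskmaster)
+                   (decrement-taskmaster-accept-count taskmaster)))))
+        (cond
+          ;; c1. no thread limit at all: just process
+          ((null (taskmaster-max-thread-count taskmaster))
+           (process-connection%))
+          ;; c2. the relevant ceiling was hit: log and reject with a 503
+          ((if (taskmaster-max-accept-count taskmaster)
+               (>= (taskmaster-accept-count taskmaster)
+                   (taskmaster-max-accept-count taskmaster))
+               (>= (taskmaster-thread-count taskmaster)
+                   (taskmaster-max-thread-count taskmaster)))
+           (too-many-taskmaster-requests taskmaster socket)
+           (send-service-unavailable-reply taskmaster socket))
+          ;; c3. accepted, but all worker "slots" busy: block, then process
+          ((and (taskmaster-max-accept-count taskmaster)
+                (>= (taskmaster-thread-count taskmaster)
+                    (taskmaster-max-thread-count taskmaster)))
+           (wait-for-free-connection taskmaster)
+           (process-connection%))
+          ;; c4. the normal case
+          (t (process-connection%)))))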
+
+As can be observed, handle-incoming-connection% implements the bulk of
+the decision-making process for one-thread-per-connection
+taskmasters. This isn't *very* difficult to wrap one's head around,
+despite the apparent gnarl; simplifications, at the very least of an
+aesthetic nature, are possible. I'll leave them as a potential
+exploration exercise for a later date -- or, if the reader desires to
+chime in...
 
 <a name="otpct-ssur" href="#otpct-ssur">[otpct-ssur]</a>
-[**send-service-unavailable-reply**][ht-otpct-ssur]:
+[**send-service-unavailable-reply**][ht-otpct-ssur]: Yet another pile
+of gnarl. Wraps everything in an [unwind-protect][clhs-up] and catches
+all potential conditions. In this context, it sends a
+http-service-unavailable message whose content is set to the text
+returned by [acceptor-status-message][ht-asm].
+
+At the end, it calls [decrement-taskmaster-accept-count](#otpct-dtac)
+and flushes and closes the connection stream.
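+
+Schematically, and glossing over most of said gnarl -- send-503 below
+is hypothetical shorthand for the machinery that formats the actual
+reply out of acceptor-status-message:
+
+    (defmethod send-service-unavailable-reply
+        ((taskmaster one-thread-per-connection-taskmaster) socket)
+      (unwind-protect
+           (ignore-errors               ; catch (and drop) everything
+             ;; SEND-503 stands for: write an HTTP 503 response whose
+             ;; body is (acceptor-status-message acceptor
+             ;; +http-service-unavailable+) to the socket stream
+             (send-503 (taskmaster-acceptor taskmaster) socket))
+        ;; the cleanup runs no matter what happened above
+        (decrement-taskmaster-accept-count taskmaster)
+        (ignore-errors
+          (finish-output (usocket:socket-stream socket)))
+        (ignore-errors
+          (close (usocket:socket-stream socket) :abort t))))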
 
 <a name="otpct-cas" href="#otpct-cas">[otpct-cas]</a>
-[**client-as-string**][ht-otpct-cas]:
+[**client-as-string**][ht-otpct-cas]: Convenience function used by
+[create-request-handler-thread](#otpct-crht) to give a name to the
+thread to be created, of the form "address:port".
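+
+Something in the vein of, assuming usocket's peer accessors:
+
+    (defun client-as-string (socket)
+      ;; "address:port" of the peer, or nil if it can't be queried
+      (let ((address (usocket:get-peer-address socket))
+            (port (usocket:get-peer-port socket)))
+        (when (and address port)
+          (format nil "~A:~A"
+                  (usocket:vector-quad-to-dotted-quad address)
+                  port))))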
+
+As can be seen, this is still a monster, albeit more organized than
+its [acceptor][ht-iv] brother. At this point, the big remaining pieces
+are requests, replies and handler dispatchers, which should provide us
+with almost[^3] everything we need to actually have a Hunchentoot.
 
 [^1]: Though all methods are piled together in the same place and,
     say, default "thread count" implementations are provided for some
@@ -153,8 +208,16 @@ handled.
     Oh, it could be extended? Well, show me one of these
     extensions. And if there are any, why aren't they in Hunchentoot?
 
+[^3]: There's tons of [glue][tmsr-work-iv] poured around this set of
+    fundamentally TCPistic web-server-pieces. Some of this glue I've
+    already reviewed while proceeding through the components that use
+    it; some of it I haven't, and sooner or later I will, if only to
+    establish what's to stay and what's to go once I start cleaning
+    the whole thing up.
+
 [apache-mpm]: https://httpd.apache.org/docs/2.2/en/mpm.html
 [ht-iii]: /posts/y06/097-hunchentoot-iii.html
+[ht-iv]: /posts/y06/098-hunchentoot-iv.html
 [ht-stt]: http://coad.thetarpit.org/hunchentoot/c-taskmaster.lisp.html#L127
 [ht-stt-ea]: http://coad.thetarpit.org/hunchentoot/c-taskmaster.lisp.html#L137
 [ht-stt-hic]: http://coad.thetarpit.org/hunchentoot/c-taskmaster.lisp.html#L141
@@ -180,9 +243,16 @@ handled.
 [ht-otpct-hic2]: http://coad.thetarpit.org/hunchentoot/c-taskmaster.lisp.html#L359
 [ht-otpct-ssur]: http://coad.thetarpit.org/hunchentoot/c-taskmaster.lisp.html#L392
 [ht-otpct-cas]: http://coad.thetarpit.org/hunchentoot/c-taskmaster.lisp.html#L416
+[ht-cvs]: http://coad.thetarpit.org/hunchentoot/c-compat.lisp.html#L132
+[ht-cvw]: http://coad.thetarpit.org/hunchentoot/c-compat.lisp.html#L135
+[ht-hcs]: http://coad.thetarpit.org/hunchentoot/c-conditions.lisp.html#L121
 [ht-s]: /posts/y06/098-hunchentoot-iv.html#s
 [ht-s2]: /posts/y06/098-hunchentoot-iv.html#s2
 [ht-ac]: /posts/y06/098-hunchentoot-iv.html#ac
 [ht-pc]: /posts/y06/098-hunchentoot-iv.html#pc
+[ht-alm]: /posts/y06/098-hunchentoot-iv.html#alm
+[ht-asm]: /posts/y06/098-hunchentoot-iv.html#asm
 [ht-ifdefism]: /posts/y06/098-hunchentoot-iv.html#selection-195.296-199.206
 [tlogz-1926874]: http://logs.nosuchlabs.com/log/trilema/2019-08-09#1926874
+[clhs-up]: http://clhs.lisp.se/Body/s_unwind.htm
+[tmsr-work-iv]: /posts/y06/099-tmsr-work-iv.html#selection-139.61-139.151