The introduction of include_paths made me realize that adding more
parser options (e.g. namespace configuration) would become unwieldy.
This change generalizes parser options as a keyword list with formal
type definitions for supported values. At the moment, :include_paths is
the only option, but it's now much more obvious how to add new ones.
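A minimal sketch of the shape this enables (module and option names here are illustrative, not the actual API):

```elixir
defmodule SketchParser do
  @typedoc "Additional directories to search for included Thrift files"
  @type opt :: {:include_paths, [Path.t()]}
  @type opts :: [opt]

  @spec parse(String.t(), opts) :: {:ok, String.t()}
  def parse(source, opts \\ []) do
    # New options can be added by extending the opt type and reading them here.
    _include_paths = Keyword.get(opts, :include_paths, [])
    {:ok, source}
  end
end
```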
We previously only supported including other Thrift files whose
pathnames were relative to the current file. This change allows a list
of additional include (search) paths to be specified as part of the
project configuration (`thrift.include_paths`) or via a command line
option (`--include dir`).
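For illustration, the project-configuration form might look like this (app name and paths hypothetical):

```elixir
# mix.exs (sketch)
def project do
  [
    app: :my_app,
    version: "0.1.0",
    # Extra directories to search when resolving include statements:
    thrift: [include_paths: ["thrift/shared", "vendor/thrift"]]
  ]
end
```

The `--include dir` command line option supplies the same paths for one-off runs.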
We now use the ThriftTest.thrift file from the Apache Thrift distribution
(original license retained) as a fully representative test file.
We also include StressTest.thrift as a simpler additional test file for
those test cases that work on multiple input files.
Lastly, our tests now clean up generated files `on_exit`. Otherwise,
the last test would always leave its generated files behind.
This change introduces an intermediate thrift/ directory into the test
script hierarchy. This matches the lib/thrift/ layout and makes it
clearer which top-level test/ entries are for test support versus test
scripts.
start_link/5's @spec wasn't updated when its signature was last changed.
Also add a @spec to init/6 and mark some internal functions as private.
We only support :ranch_tcp at the moment, so that's explicitly enforced
in both the typespecs and the init/6 function head.
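A simplified sketch of that double enforcement (name and arity reduced from the real init/6):

```elixir
defmodule TransportSketch do
  # The typespec documents the restriction...
  @spec check(:ranch_tcp) :: :ok
  # ...and the function head enforces it at runtime: any other transport
  # fails to match and raises FunctionClauseError.
  def check(:ranch_tcp), do: :ok
end
```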
* Add `values/0` to generated enum module
This came up in one of my use cases. It seems a reasonable counterpart
to `names/0`.
* Change values/names to use meta, add enum functions
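For example (enum name and members hypothetical), a generated enum module behaves roughly like this sketch:

```elixir
defmodule StatusEnum do
  # A generated enum module carries the name -> value mapping from the schema...
  @mapping [ok: 0, error: 1, retry: 2]

  def names, do: Keyword.keys(@mapping)
  # ...and values/0 now exposes the numeric side of that mapping.
  def values, do: Keyword.values(@mapping)
end
```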
We were still using OptionParser.parse!/1 in the compiler task from back
when we also supported interactive usage. Unfortunately, that function
raises an exception when it encounters an unknown (invalid) option.
Instead, we use OptionParser.parse/1, which allows us to safely discard
invalid options.
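The difference is easy to see with a small example (flag names illustrative; shown here with an explicit `:strict` list, hence arity 2):

```elixir
argv = ["--out", "gen", "--bogus"]

# parse/2 collects unrecognized switches in the third element of the tuple...
{opts, args, invalid} = OptionParser.parse(argv, strict: [out: :string])
IO.inspect({opts, args, invalid})
# ...whereas parse!/2 would raise OptionParser.ParseError on --bogus.
```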
This also disables the "Readability.Specs" check because it appears to
crash Credo when it encounters some of our macro-ized code. That can be
investigated separately.
This change moves all compiler options under a single top-level `thrift`
configuration key that contains a Keyword list of individual options.
`thrift_files` and `thrift_output` are now just `files` and `output_path`
within this sub-list.
The goal of this change is to tidy up the top-level configuration space
and provide for further expansion of our compiler option set.
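Concretely, a project's configuration might now look like this sketch (app name and paths hypothetical):

```elixir
# mix.exs (sketch)
def project do
  [
    app: :my_app,
    version: "0.1.0",
    # All compiler options live under the single :thrift key:
    thrift: [
      files: Path.wildcard("thrift/**/*.thrift"),
      output_path: "lib/generated"
    ]
  ]
end
```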
* Catch name collisions within a type
If two names of the same type (for example, two structs) exist in the same
file, one of them would previously be discarded silently at parse time. We
now raise an exception in this case.
* Clean up exception message
My previous attempt to consolidate these two mix tasks into one was
unsuccessful due to the way mix tasks receive their command line
arguments. Specifically, we don't have a good way to differentiate
between e.g. a `mix test` command line containing test script filenames
and a `mix compile.thrift` command line naming Thrift schema files. When
the `:thrift` compiler is included, it will also run in the `mix test`
task flow and get confused about its inputs.
We now have two distinct and purpose-built mix tasks:
- `compile.thrift` - intended for use in the Mix.compilers list
- `thrift.generate` - intended for interactive command-line use
Previously the resolver was a separate process that just maintained a map. This
had the drawback that it was difficult to catch errors raised during the
resolution process. We'd like to be able to raise such an error on name collisions.
We previously defined nearly identical sets of constants representing
the binary protocol's field types as attributes in each module that
required them. This duplication isn't great for long-term maintenance.
This approach defines a single set of type constants as macros in a
new Thrift.Protocol.Binary.Type module. Using macros lets us preserve
all of the compile-time benefits we were getting from the previous
approach while still supporting code sharing.
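A simplified stand-in for the shared module shows the idea (only three constants shown; the type IDs are the standard Thrift binary protocol values):

```elixir
defmodule Thrift.Protocol.Binary.Type do
  # One shared definition instead of duplicated module attributes.
  defmacro bool, do: 2
  defmacro i32, do: 8
  defmacro string, do: 11
end

defmodule BinarySketch do
  import Thrift.Protocol.Binary.Type

  # Because these are macros, they expand to integer literals at compile
  # time and can therefore be used in match patterns, just like the module
  # attributes they replace.
  def type_name(bool()), do: :bool
  def type_name(i32()), do: :i32
  def type_name(string()), do: :string
end
```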
* Timeout / Retry refactor
Servers now support read timeouts
Additionally, I changed how we handle retries due to some idiosyncrasies
with how erlang responds to server failures.
When the server closes the connection, the client's `:gen_tcp.send`
operation will always succeed, failing instead on the corresponding
`:gen_tcp.recv` command. This is problematic, because we can't know for
sure that a message has been sent, and if we retry, we might resend a
message accidentally.
Furthermore, we can _never_ really know if a oneway message has been
sent or not.
Admittedly, the window for this is pretty narrow: the server would have
to sever the connection between the send and recv calls. The next case
is more troublesome.
Since we have backoff behavior, imagine if the user has set a retry
count of `:infinity` and made a call to a dead server. Now the client is
backing off and the `GenServer` call will time out after 5 seconds. The
client is now stuck in a loop, reconnecting forever.
In light of these issues, I've removed repeated reconnects from the
client and have implemented a one-shot reconnect. We can still send a
message twice, but the window is sufficiently small to make me not worry
so much. This also has the effect of improving UX in the face of a
server that disconnects clients after a short timeout. If you send a
message on a disconnected client, it remembers it, reconnects and sends
it.
That seems reasonable.
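The one-shot behavior can be sketched as follows (names and shapes hypothetical; the real client works on `:gen_tcp` sockets rather than injected functions):

```elixir
defmodule ReconnectSketch do
  # Try the request once; if the connection was closed, reconnect once and
  # retry once. No loop, so a dead server cannot trap us in endless backoff.
  def call(conn, request, connect_fun, send_fun) do
    case send_fun.(conn, request) do
      {:error, :closed} ->
        with {:ok, fresh} <- connect_fun.() do
          send_fun.(fresh, request)
        end

      other ->
        other
    end
  end
end
```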
Generated module names can be quite long, so this pattern produces very long
lines because Macro.to_string does not wrap pipe operators onto the next line:

    serialized_var = %Big.Generated.Module{} |> Big.Generated.Module.serialize()

This change just replaces it with a slightly more readable:

    var = %Big.Generated.Module{}
    Big.Generated.Module.serialize(var)