not heavily tested, but these functions appear to work correctly
|
shunget cannot unget eof status, causing wcstol to leave endptr
pointing to the wrong place when scanning, for example, L"0x". the
cheap fix is to make the read function provide an infinite stream of
bogus characters rather than eof. really this is something of a design flaw
in how the shgetc system is used for strto* and wcsto*; in the long
term, I believe multi-character unget should be scrapped and replaced
with a function that can subtract from the f->shcnt counter.
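a rough sketch of the idea (hypothetical sgetc/sunget helpers, not the
actual shgetc machinery): reads past the end of the string yield a
bogus byte instead of an eof status, so unget stays a plain position
decrement and the consumed count comes out right.

    #include <ctype.h>
    #include <stdio.h>

    /* hypothetical string-backed source */
    struct src {
        const char *buf;
        size_t len, pos;
    };

    /* past the end, keep returning a bogus byte instead of eof */
    static int sgetc(struct src *f)
    {
        return f->pos < f->len ? (unsigned char)f->buf[f->pos++] : (f->pos++, '#');
    }

    /* unget is just a position decrement, even after reading past the end */
    static void sunget(struct src *f)
    {
        f->pos--;
    }

    int main(void)
    {
        /* scanning "0x" with no hex digit after it: read '0', 'x', then a
         * bogus '#', unget twice, and report that only "0" was consumed */
        struct src f = { "0x", 2, 0 };
        int c = sgetc(&f);
        if (c == '0' && sgetc(&f) == 'x' && !isxdigit(sgetc(&f))) {
            sunget(&f);   /* push back the bogus character */
            sunget(&f);   /* push back the 'x' */
        }
        printf("consumed %zu character(s)\n", f.pos);   /* prints 1 */
        return 0;
    }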
|
the immediate benefit is a significant debloating of the float parsing
code by moving the responsibility for keeping track of the number of
characters read to a different module.
by linking shgetc with the stdio buffer logic, counting logic is
deferred to buffer refill time, keeping the calls to shgetc fast and
light.
in the future, shgetc will also be useful for integrating the new
float code with scanf, which needs to not only count the characters
consumed, but also limit the number of characters read based on field
width specifiers.
shgetc may also become a useful tool for simplifying the integer
parsing code.
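a toy illustration of the count-at-refill idea (a made-up cstream type,
not the actual stdio/shgetc interface): the per-character path is just a
pointer increment, and the running count is only folded in when the
buffer has to be refilled.

    #include <stdio.h>
    #include <string.h>

    struct cstream {
        unsigned char buf[8];
        unsigned char *rpos, *rend;  /* read position / end of valid data */
        size_t cnt_base;             /* characters consumed in earlier buffers */
        const char *src;             /* backing input for this toy example */
    };

    /* total consumed so far, without any per-character bookkeeping */
    static size_t ccount(struct cstream *f)
    {
        return f->cnt_base + (size_t)(f->rpos - f->buf);
    }

    static int refill(struct cstream *f)
    {
        f->cnt_base += (size_t)(f->rpos - f->buf);  /* counting happens here */
        f->rpos = f->rend = f->buf;
        size_t n = strlen(f->src);
        if (n > sizeof f->buf) n = sizeof f->buf;
        if (!n) return EOF;
        memcpy(f->buf, f->src, n);
        f->src += n;
        f->rend = f->buf + n;
        return *f->rpos++;
    }

    /* the fast path: no counter update per character */
    static int cgetc(struct cstream *f)
    {
        return f->rpos < f->rend ? *f->rpos++ : refill(f);
    }

    int main(void)
    {
        struct cstream f = { .src = "3.14159265358979" };
        f.rpos = f.rend = f.buf;
        while (cgetc(&f) != EOF)
            ;
        printf("consumed %zu characters\n", ccount(&f));  /* prints 16 */
        return 0;
    }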
|
this version is intended to be fully conformant to the ISO C, POSIX,
and IEEE standards for conversion of decimal/hex floating point
strings to float, double, and long double (ld64 or ld80 only at
present) values. in particular, all results are intended to be rounded
correctly according to the current rounding mode. further, this
implementation aims to set the floating point underflow, overflow, and
inexact flags to reflect the conversion performed.
a moderate amount of testing has been performed (by nsz and myself)
prior to integration of the code in musl, but it still may have bugs.
so far, only strto(d|ld|f) use the new code. scanf integration will be
done as a separate commit, and i will add implementations of the wide
character functions later.
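a quick way to observe this, as a sketch using only the standard
<fenv.h> interfaces (the C standard itself does not require strtod to
raise these flags; raising them is the behavior this code aims for):

    #include <fenv.h>
    #include <stdio.h>
    #include <stdlib.h>

    #pragma STDC FENV_ACCESS ON

    int main(void)
    {
        /* the new conversion code aims to raise these flags; the C
         * standard leaves them largely unspecified for strtod */
        feclearexcept(FE_ALL_EXCEPT);
        double a = strtod("0.1", NULL);     /* not exactly representable */
        printf("%.17g inexact=%d\n", a, !!fetestexcept(FE_INEXACT));

        feclearexcept(FE_ALL_EXCEPT);
        double b = strtod("1e-400", NULL);  /* below the smallest subnormal */
        printf("%g underflow=%d\n", b, !!fetestexcept(FE_UNDERFLOW));
        return 0;
    }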
|
thanks to the hard work of Szabolcs Nagy (nsz), who identified the best
(from a correctness and license standpoint) implementations from freebsd
and openbsd and cleaned them up! musl should now fully support c99
float and long double math functions, and has near-complete complex
math support. tgmath should also work (fully on gcc-compatible
compilers, and mostly on any c99 compiler).
based largely on commit 0376d44a890fea261506f1fc63833e7a686dca19 from
nsz's libm git repo, with some additions (dummy versions of a few
missing long double complex functions, etc.) by me.
various cleanups still need to be made, including re-adding (if
they're correct) some asm functions that were dropped.
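a small sketch of the tgmath dispatch that should now work (compile
with -lm):

    #include <complex.h>
    #include <stdio.h>
    #include <tgmath.h>

    /* sqrt() dispatches on argument type: sqrtf, sqrtl, or csqrt */
    int main(void)
    {
        float f = sqrt(2.0f);
        long double l = sqrt(2.0L);
        double complex z = sqrt(-1.0 + 0*I);   /* principal root: i */
        printf("%f %Lf %f%+fi\n", f, l, creal(z), cimag(z));
        return 0;
    }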
|
these have not been heavily tested, but they should work as described
in the old standards. probably broken for non-finite values...
|
patch by Pascal Cuoq (with minor tweaks to comments)
|
this was the cause of crashes in printf when attempting to print
floating point values.
|
1. my interpretation of the subject sequence definition was wrong. adjust
the parser to conform to the standard.
2. some code for handling the tail overflow case was missing (forgot to
finish writing it).
3. a typo (= instead of ==) caused ERANGE to wrongly behave like EINVAL
(the expected ERANGE behavior is sketched below).
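a sketch of the standard strtol semantics the fix restores (plain
library calls, not the internal parser code):

    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* overflow: result clamps to LONG_MAX and errno becomes ERANGE */
        errno = 0;
        long v = strtol("999999999999999999999999999", NULL, 10);
        printf("%ld range_error=%d\n", v, errno == ERANGE);

        /* no conversion at all is a different condition: value 0, nothing
         * consumed, and (per ISO C) errno is not required to be set */
        const char *s = "zzz";
        char *end;
        long w = strtol(s, &end, 10);
        printf("%ld consumed=%d\n", w, (int)(end - s));
        return 0;
    }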
|
stopping without letting the parser see a stop character prevented it
from producing a result, so treat all high characters as the null
character and pass them through to the parser.
also eliminated an ugly tmp var using compound literals.
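a hypothetical sketch of both tricks (the 128 threshold and the helper
are illustrative, not the actual wcstol code):

    #include <stdio.h>
    #include <wchar.h>

    /* stand-in for handing one byte to a byte-oriented digit parser */
    static void feed_byte(const unsigned char *c)
    {
        printf("parser sees byte %d\n", *c);
    }

    int main(void)
    {
        wchar_t wc = 0x2460;   /* a "high" character, outside the byte range */
        /* before: unsigned char tmp = wc < 128 ? wc : 0; feed_byte(&tmp);
         * a compound literal provides the addressable byte without a
         * named temporary; the high character maps to the null stop byte */
        feed_byte(&(unsigned char){ wc < 128 ? wc : 0 });
        return 0;
    }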
|
this fixes a number of bugs in integer parsing due to lazy, haphazard
wrapping, as well as some misinterpretations of the standard. the new
parser is able to work character-at-a-time or on whole strings, making
it easy to support the wide functions without unbounded space for
conversion. it will also be possible to update scanf to use the new
parser.
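a rough sketch of the kind of careful accumulation intended
(illustrative only, not the actual parser): detect overflow before each
step and saturate instead of letting the value wrap, so the caller can
report ERANGE reliably.

    #include <limits.h>
    #include <stdio.h>

    static unsigned long scan_dec(const char *s, int *overflow)
    {
        unsigned long y = 0;
        *overflow = 0;
        for (; *s >= '0' && *s <= '9'; s++) {
            unsigned d = *s - '0';
            if (y > (ULONG_MAX - d) / 10) {
                *overflow = 1;
                y = ULONG_MAX;    /* saturate; keep consuming digits */
            } else {
                y = 10*y + d;
            }
        }
        return y;
    }

    int main(void)
    {
        int of;
        printf("%lu %d\n", scan_dec("12345", &of), of);
        printf("%lu %d\n", scan_dec("99999999999999999999999999", &of), of);
        return 0;
    }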
|
Smoothsort is an adaptive variant of heapsort. This version was
written by Valentin Ochs (apo) specifically for inclusion in musl. I
worked with him to get it working in O(1) memory usage even with giant
array element widths, and to optimize it heavily for size and speed.
It's still roughly 4 times as large as the old heap sort
implementation, but roughly 20 times faster given an almost-sorted
array of 1M elements (20 being the base-2 log of 1M), i.e. it really
does reduce O(n log n) to O(n) in the mostly-sorted case. It's still
somewhat slower than glibc's Introsort for random input, but now
considerably faster than glibc when the input is already sorted, or
mostly sorted.
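a usage sketch of the kind of input where the adaptive behavior pays
off (almost-sorted data; timing left out):

    #include <stdio.h>
    #include <stdlib.h>

    static int cmp_int(const void *a, const void *b)
    {
        int x = *(const int *)a, y = *(const int *)b;
        return (x > y) - (x < y);
    }

    int main(void)
    {
        enum { N = 1000000 };
        int *a = malloc(N * sizeof *a);
        if (!a) return 1;
        for (int i = 0; i < N; i++) a[i] = i;   /* mostly sorted... */
        a[N/3] = -1; a[2*N/3] = N + 1;          /* ...with two outliers */
        qsort(a, N, sizeof *a, cmp_int);
        printf("%d %d\n", a[0], a[N-1]);        /* -1 1000001 */
        free(a);
        return 0;
    }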
|
- 0e10000000000000000000000000000000 was setting ERANGE
- the exponent char e/p was considered part of the match even if not
  followed by a valid decimal value
- "1e +10" was parsed as "1e+10"
- the hex digits a-f were misinterpreted as the values 0..5 instead of
  10..15
(expected post-fix behavior for these cases is sketched below)
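a sketch, using plain strtod calls per the C standard (not the internal
code):

    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        char *end;

        errno = 0;
        double a = strtod("0e10000000000000000000000000000000", NULL);
        printf("%g range_error=%d\n", a, errno == ERANGE);   /* 0, no ERANGE */

        const char *s1 = "1e+";                 /* 'e' not followed by digits */
        double b = strtod(s1, &end);
        printf("%g consumed=%d\n", b, (int)(end - s1));      /* 1, consumed=1 */

        const char *s2 = "1e +10";              /* space ends the number */
        double c = strtod(s2, &end);
        printf("%g consumed=%d\n", c, (int)(end - s2));      /* 1, consumed=1 */

        double d = strtod("0xap0", NULL);       /* hex digit 'a' is 10 */
        printf("%g\n", d);                      /* 10 */
        return 0;
    }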
|
sadly the C language does not specify any such implicit conversion, so
this is not a matter of just fixing warnings (as gcc treats it) but of
fixing actual errors. i would like to revisit a number of these changes and
possibly revise the types used to reduce the number of casts required.
|
this is actually a workaround for a bug in gcc, whereby it asserts
inequality of the keys being compared...