diff mupdf-source/thirdparty/curl/docs/TODO @ 2:b50eed0cc0ef upstream
ADD: MuPDF v1.26.7: the MuPDF source as downloaded by a default build of PyMuPDF 1.26.4.
The directory name has changed: no version number in the expanded directory now.
| author | Franz Glasner <fzglas.hg@dom66.de> |
|---|---|
| date | Mon, 15 Sep 2025 11:43:07 +0200 |
| parents | |
| children | |
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/mupdf-source/thirdparty/curl/docs/TODO	Mon Sep 15 11:43:07 2025 +0200
@@ -0,0 +1,1127 @@

                                  _   _ ____  _
                              ___| | | |  _ \| |
                             / __| | | | |_) | |
                            | (__| |_| |  _ <| |___
                             \___|\___/|_| \_\_____|

               Things that could be nice to do in the future

 Things to do in project curl. Please tell us what you think, contribute and
 send us patches that improve things!

 Be aware that these are things that we could do, or have once been considered
 things we could do. If you want to work on any of these areas, please
 consider bringing it up for discussion first on the mailing list so that we
 all agree it is still a good idea for the project!

 All bugs documented in the KNOWN_BUGS document are subject for fixing!

 1. libcurl
 1.1 TFO support on Windows
 1.3 struct lifreq
 1.5 get rid of PATH_MAX
 1.7 Support HTTP/2 for HTTP(S) proxies
 1.8 CURLOPT_RESOLVE for any port number
 1.9 Cache negative name resolves
 1.10 auto-detect proxy
 1.11 minimize dependencies with dynamically loaded modules
 1.12 updated DNS server while running
 1.13 c-ares and CURLOPT_OPENSOCKETFUNCTION
 1.14 Typesafe curl_easy_setopt()
 1.15 Monitor connections in the connection pool
 1.16 Try to URL encode given URL
 1.17 Add support for IRIs
 1.18 try next proxy if one doesn't work
 1.20 SRV and URI DNS records
 1.22 CURLINFO_PAUSE_STATE
 1.23 Offer API to flush the connection pool
 1.24 TCP Fast Open for windows
 1.25 Expose tried IP addresses that failed
 1.27 hardcode the "localhost" addresses
 1.28 FD_CLOEXEC
 1.29 Upgrade to websockets
 1.30 config file parsing

 2. libcurl - multi interface
 2.1 More non-blocking
 2.2 Better support for same name resolves
 2.3 Non-blocking curl_multi_remove_handle()
 2.4 Split connect and authentication process
 2.5 Edge-triggered sockets should work
 2.6 multi upkeep

 3. Documentation
 3.2 Provide cmake config-file

 4. FTP
 4.1 HOST
 4.2 Alter passive/active on failure and retry
 4.3 Earlier bad letter detection
 4.5 ASCII support
 4.6 GSSAPI via Windows SSPI
 4.7 STAT for LIST without data connection
 4.8 Option to ignore private IP addresses in PASV response

 5. HTTP
 5.1 Better persistency for HTTP 1.0
 5.3 Rearrange request header order
 5.4 Allow SAN names in HTTP/2 server push
 5.5 auth= in URLs

 6. TELNET
 6.1 ditch stdin
 6.2 ditch telnet-specific select
 6.3 feature negotiation debug data

 7. SMTP
 7.2 Enhanced capability support
 7.3 Add CURLOPT_MAIL_CLIENT option

 8. POP3
 8.2 Enhanced capability support

 9. IMAP
 9.1 Enhanced capability support

 10. LDAP
 10.1 SASL based authentication mechanisms

 11. SMB
 11.1 File listing support
 11.2 Honor file timestamps
 11.3 Use NTLMv2
 11.4 Create remote directories

 12. New protocols

 13. SSL
 13.2 Provide mutex locking API
 13.3 Support in-memory certs/ca certs/keys
 13.4 Cache/share OpenSSL contexts
 13.5 Export session ids
 13.6 Provide callback for cert verification
 13.7 improve configure --with-ssl
 13.8 Support DANE
 13.10 Support Authority Information Access certificate extension (AIA)
 13.11 Support intermediate & root pinning for PINNEDPUBLICKEY
 13.12 Support HSTS
 13.14 Support the clienthello extension

 14. GnuTLS
 14.2 check connection

 15. WinSSL/SChannel
 15.1 Add support for client certificate authentication
 15.3 Add support for the --ciphers option
 15.4 Add option to disable client certificate auto-send

 16. SASL
 16.1 Other authentication mechanisms
 16.2 Add QOP support to GSSAPI authentication
 16.3 Support binary messages (i.e.: non-base64)

 17. SSH protocols
 17.1 Multiplexing
 17.3 Support better than MD5 hostkey hash
 17.4 Support CURLOPT_PREQUOTE

 18. Command line tool
 18.1 sync
 18.2 glob posts
 18.3 prevent file overwriting
 18.5 UTF-8 filenames in Content-Disposition
 18.7 at least N milliseconds between requests
 18.9 Choose the name of file in braces for complex URLs
 18.10 improve how curl works in a windows console window
 18.11 Windows: set attribute 'archive' for completed downloads
 18.12 keep running, read instructions from pipe/socket
 18.15 --retry should resume
 18.16 send only part of --data
 18.17 consider file name from the redirected URL with -O ?
 18.18 retry on network is unreachable
 18.19 expand ~/ in config files
 18.20 host name sections in config files

 19. Build
 19.1 roffit
 19.2 Enable PIE and RELRO by default
 19.3 cmake test suite improvements

 20. Test suite
 20.1 SSL tunnel
 20.2 nicer lacking perl message
 20.3 more protocols supported
 20.4 more platforms supported
 20.5 Add support for concurrent connections
 20.6 Use the RFC6265 test suite
 20.7 Support LD_PRELOAD on macOS

 21. Next SONAME bump
 21.1 http-style HEAD output for FTP
 21.2 combine error codes
 21.3 extend CURLOPT_SOCKOPTFUNCTION prototype

 22. Next major release
 22.1 cleanup return codes
 22.2 remove obsolete defines
 22.3 size_t
 22.4 remove several functions
 22.5 remove CURLOPT_FAILONERROR
 22.7 remove progress meter from libcurl
 22.8 remove 'curl_httppost' from public

==============================================================================

1. libcurl

1.1 TFO support on Windows

  TCP Fast Open is supported on several platforms but not on Windows. Work on
  this was once started but never finished.

  See https://github.com/curl/curl/pull/3378

1.3 struct lifreq

  Use 'struct lifreq' and SIOCGLIFADDR instead of 'struct ifreq' and
  SIOCGIFADDR on newer Solaris versions, as they claim the latter is obsolete,
  in order to properly support IPv6 addresses on network interfaces.

1.5 get rid of PATH_MAX

  Having code use and rely on PATH_MAX is not nice:
  https://insanecoding.blogspot.com/2007/11/pathmax-simply-isnt.html

  Currently the libssh2 SSH based code uses it, but to remove PATH_MAX from
  there we need libssh2 to properly tell us when we pass in a buffer that is
  too small, and its current API (as of libssh2 1.2.7) doesn't.

1.7 Support HTTP/2 for HTTP(S) proxies

  Support for doing HTTP/2 to HTTP and HTTPS proxies is still missing.

  See https://github.com/curl/curl/issues/3570

1.8 CURLOPT_RESOLVE for any port number

  This option allows applications to set a replacement IP address for a given
  host + port pair. Consider adding support for providing a replacement
  address for the host name on all port numbers.

  See https://github.com/curl/curl/issues/1264

1.9 Cache negative name resolves

  A name resolve that has failed is likely to fail when made again within a
  short period of time. Currently we only cache positive responses.

1.10 auto-detect proxy

  libcurl could be made to detect the system proxy setup automatically and
  use that, on Windows, macOS and Linux desktops for example.

  The pull-request to use libproxy for this was deferred due to doubts about
  the reliability of the dependency and how to use it:
  https://github.com/curl/curl/pull/977

  libdetectproxy is a (C++) library for detecting the proxy on Windows:
  https://github.com/paulharris/libdetectproxy

1.11 minimize dependencies with dynamically loaded modules

  We can create a system with loadable modules/plug-ins, where these modules
  would be the ones that link to 3rd party libs. That would allow us to avoid
  having to load ALL dependencies, since only the ones needed for the
  protocols actually used by the application would have to be loaded. See
  https://github.com/curl/curl/issues/349

1.12 updated DNS server while running

  If /etc/resolv.conf gets updated while a program using libcurl is running,
  it may cause name resolves to fail unless res_init() is called. We should
  consider calling res_init() + retry once unconditionally on all name resolve
  failures to mitigate against this. Firefox works like that. Note that
  Windows doesn't have res_init() or an alternative.

  https://github.com/curl/curl/issues/2251

1.13 c-ares and CURLOPT_OPENSOCKETFUNCTION

  curl will create most sockets via the CURLOPT_OPENSOCKETFUNCTION callback
  and close them with the CURLOPT_CLOSESOCKETFUNCTION callback. However,
  c-ares does not use those functions and instead opens and closes the sockets
  itself. This means that when curl passes the c-ares socket to the
  CURLMOPT_SOCKETFUNCTION it isn't owned by the application like other
  sockets.

  See https://github.com/curl/curl/issues/2734

1.14 Typesafe curl_easy_setopt()

  One of the most common problems in applications using libcurl is the lack
  of type checking for curl_easy_setopt(), which happens because it accepts
  varargs and thus can take any type.

  One possible solution to this is to introduce a few different versions of
  setopt for the different kinds of data you can set.

   curl_easy_set_num() - sets a long value

   curl_easy_set_large() - sets a curl_off_t value

   curl_easy_set_ptr() - sets a pointer

   curl_easy_set_cb() - sets a callback PLUS its callback data

1.15 Monitor connections in the connection pool

  libcurl's connection cache or pool holds a number of open connections for
  the purpose of possible subsequent connection reuse. It may contain anything
  from a few up to a significant number of connections. Currently, libcurl
  leaves all connections as they are, and only when a connection is iterated
  over for matching or reuse purposes is it verified that it is still alive.

  Those connections may get closed by the server side for idleness or they may
  get a HTTP/2 ping from the peer to verify that they're still alive. By
  adding monitoring of the connections while in the pool, libcurl can detect
  dead connections (and close them) better and earlier, and it can handle
  HTTP/2 pings to keep such ones alive even when not actively doing transfers
  on them.

1.16 Try to URL encode given URL

  Given a URL that for example contains spaces, libcurl could have an option
  that would try somewhat harder than it does now and convert spaces to %20
  and perhaps URL encode byte values over 128, etc (basically do what the
  redirect following code already does).

  https://github.com/curl/curl/issues/514

1.17 Add support for IRIs

  IRIs (RFC 3987) allow localized, non-ASCII, names in the URL. To properly
  support this, curl/libcurl would need to translate/encode the given input
  from the input string encoding into percent encoded output "over the wire".

  To make that work smoothly for curl users even on Windows, curl would
  probably need to be able to convert from several input encodings.

1.18 try next proxy if one doesn't work

  Allow an application to specify a list of proxies to try, and when
  connecting to the first one fails, go on and try the next until the list is
  exhausted. Browsers support this feature at least when they specify proxies
  using PACs.

  https://github.com/curl/curl/issues/896

1.20 SRV and URI DNS records

  Offer support for resolving SRV and URI DNS records for libcurl to know
  which server to connect to for various protocols (including HTTP!).

1.22 CURLINFO_PAUSE_STATE

  Return information about the transfer's current pause state, in both
  directions. https://github.com/curl/curl/issues/2588

1.23 Offer API to flush the connection pool

  Sometimes applications want to flush all the existing connections kept
  alive. An API could allow a forced flush or just a forced loop that would
  properly close all connections that have been closed by the server already.

1.24 TCP Fast Open for windows

  libcurl supports the CURLOPT_TCP_FASTOPEN option since 7.49.0 for Linux and
  Mac OS. Windows supports TCP Fast Open starting with Windows 10, version
  1607, and we should add support for it.

1.25 Expose tried IP addresses that failed

  When libcurl fails to connect to a host, it should be able to offer the
  application the list of IP addresses that were used in the attempt.

  https://github.com/curl/curl/issues/2126

1.27 hardcode the "localhost" addresses

  There's this new spec getting adopted that says "localhost" should always
  and unconditionally be a local address and not get resolved by a DNS server.
  A fine way for curl to fix this would be to simply hard-code the response to
  127.0.0.1 and/or ::1 (depending on which IP versions are requested). This is
  what the browsers probably will do with this hostname.

  https://bugzilla.mozilla.org/show_bug.cgi?id=1220810

  https://tools.ietf.org/html/draft-ietf-dnsop-let-localhost-be-localhost-02

1.28 FD_CLOEXEC

  It sets the close-on-exec flag for the file descriptor, which causes the
  file descriptor to be automatically (and atomically) closed when any of the
  exec-family functions succeed. Should probably be set by default?

  https://github.com/curl/curl/issues/2252

1.29 Upgrade to websockets

  libcurl could offer a smoother path to get to a websocket connection.
  See https://github.com/curl/curl/issues/3523

  Michael Kaufmann's suggestion here:
  https://curl.haxx.se/video/curlup-2017/2017-03-19_05_Michael_Kaufmann_Websocket_support_for_curl.mp4

1.30 config file parsing

  Consider providing an API, possibly in a separate companion library, for
  parsing a config file like curl's -K/--config option, to allow applications
  to get the same ability to read curl options from files.

  See https://github.com/curl/curl/issues/3698

2. libcurl - multi interface

2.1 More non-blocking

  Make sure we don't ever loop because of non-blocking sockets returning
  EWOULDBLOCK or similar. Blocking cases include:

  - Name resolves on non-windows unless c-ares or the threaded resolver is used
  - SOCKS proxy handshakes
  - file:// transfers
  - TELNET transfers
  - The "DONE" operation (post transfer protocol-specific actions) for the
    protocols SFTP, SMTP, FTP. Fixing Curl_done() for this is a worthy task.

2.2 Better support for same name resolves

  If a name resolve has been initiated for name NN and a second easy handle
  wants to resolve that name as well, make it wait for the first resolve to
  end up in the cache instead of doing a second separate resolve. This is
  especially needed when adding many simultaneous handles using the same host
  name, as the DNS resolver can otherwise get flooded.

2.3 Non-blocking curl_multi_remove_handle()

  The multi interface has a few API calls that assume a blocking behavior,
  like add_handle() and remove_handle(), which limits what we can do
  internally. The multi API needs to be moved even more into a single function
  that "drives" everything in a non-blocking manner and signals when something
  is done. A remove or add would then only ask for the action to get started,
  and then multi_perform() etc would still be called until the add/remove is
  completed.

2.4 Split connect and authentication process

  The multi interface treats the authentication process as part of the connect
  phase. As such, any failures during authentication won't trigger the
  relevant QUIT or LOGOFF for protocols such as IMAP, POP3 and SMTP.

2.5 Edge-triggered sockets should work

  The multi_socket API should work with edge-triggered socket events. One of
  the internal actions that needs to be improved for this to work perfectly is
  the 'maxloops' handling in transfer.c:readwrite_data().

2.6 multi upkeep

  In libcurl 7.62.0 we introduced curl_easy_upkeep. It unfortunately only
  works on easy handles. We should introduce a version of that for the multi
  handle, and also consider doing "upkeep" automatically on connections in the
  connection pool when the multi handle is in use.

  See https://github.com/curl/curl/issues/3199

3. Documentation

3.2 Provide cmake config-file

  A config-file package is a set of files provided by us to allow applications
  to write cmake scripts to find and use libcurl more easily. See
  https://github.com/curl/curl/issues/885

4. FTP

4.1 HOST

  HOST is a command for a client to tell which host name to use, to offer FTP
  servers name-based virtual hosting:

  https://tools.ietf.org/html/rfc7151

4.2 Alter passive/active on failure and retry

  When trying to connect passively to a server which only supports active
  connections, libcurl returns CURLE_FTP_WEIRD_PASV_REPLY and closes the
  connection. There could be a way to fall back to an active connection (and
  vice versa); an application-level approximation is sketched after section
  4.6 below. https://curl.haxx.se/bug/feature.cgi?id=1754793

4.3 Earlier bad letter detection

  Make the detection of (bad) %0d and %0a codes in FTP URL parts happen
  earlier in the process, to avoid doing a resolve and connect in vain.

4.5 ASCII support

  FTP ASCII transfers do not follow RFC 959. They don't convert the data
  accordingly.

4.6 GSSAPI via Windows SSPI

  In addition to currently supporting the SASL GSSAPI mechanism (Kerberos V5)
  via third-party GSS-API libraries, such as Heimdal or MIT Kerberos, also add
  support for GSSAPI authentication via Windows SSPI.
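
  As an illustration of the fallback that 4.2 asks libcurl to do internally,
  here is a hedged application-level sketch using the existing CURLOPT_FTPPORT
  option; the helper name fetch_with_active_fallback and the retry policy are
  invented for this example and are not part of libcurl:

    #include <curl/curl.h>

    /* Sketch only: retry in active FTP mode when the passive attempt is
       rejected, roughly what 4.2 would like libcurl to offer built-in. */
    static CURLcode fetch_with_active_fallback(CURL *curl, const char *url)
    {
      CURLcode rc;

      curl_easy_setopt(curl, CURLOPT_URL, url);
      rc = curl_easy_perform(curl);       /* first try: passive, the default */

      if(rc == CURLE_FTP_WEIRD_PASV_REPLY) {
        /* the server rejected PASV/EPSV: retry in active mode,
           "-" means use the default address for PORT/EPRT */
        curl_easy_setopt(curl, CURLOPT_FTPPORT, "-");
        rc = curl_easy_perform(curl);
      }
      return rc;
    }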

4.7 STAT for LIST without data connection

  Some FTP servers allow STAT for listing directories instead of using LIST,
  and the response is then sent over the control connection instead of over
  the otherwise used data connection:
  https://www.nsftools.com/tips/RawFTP.htm#STAT

  This is not detailed in any FTP specification.

4.8 Option to ignore private IP addresses in PASV response

  Some servers respond with private (RFC 1918 style) IP addresses in their
  PASV responses, and some other FTP client implementations can ignore such
  addresses when received. To consider for libcurl as well. See
  https://github.com/curl/curl/issues/1455

5. HTTP

5.1 Better persistency for HTTP 1.0

  "Better" support for persistent connections over HTTP 1.0
  https://curl.haxx.se/bug/feature.cgi?id=1089001

5.3 Rearrange request header order

  Server implementors often make an effort to detect browsers and to reject
  clients they can detect as not matching one. One of the last details we
  cannot yet control in libcurl's HTTP requests, which also can be exploited
  to detect that libcurl is in fact used even when it tries to impersonate a
  browser, is the order of the request headers. I propose that we introduce a
  new option in which you give headers a value, and then when the HTTP request
  is built it sorts the headers based on that number. We could then have
  internally created headers use a default value so only headers that need to
  be moved have to be specified.

5.4 Allow SAN names in HTTP/2 server push

  curl only allows HTTP/2 push promise if the provided :authority header value
  exactly matches the host name given in the URL. It could be extended to
  allow any name that would match the Subject Alternative Names in the
  server's TLS certificate.

  See https://github.com/curl/curl/pull/3581

5.5 auth= in URLs

  Add the ability to specify the preferred authentication mechanism to use by
  using ;auth=<mech> in the login part of the URL.

  For example:

  http://test:pass;auth=NTLM@example.com would be equivalent to specifying
  --user test:pass;auth=NTLM or --user test:pass --ntlm from the command line.

  Additionally this should be implemented for proxy base URLs as well.

6. TELNET

6.1 ditch stdin

  Reading input (to send to the remote server) on stdin is a crappy solution
  for library purposes. We need to invent a good way for the application to be
  able to provide the data to send.

6.2 ditch telnet-specific select

  Make the telnet support's network select() loop go away and merge the code
  into the main transfer loop. Until this is done, the multi interface won't
  work for telnet.

6.3 feature negotiation debug data

  Add telnet feature negotiation data to the debug callback as header data.

7. SMTP

7.2 Enhanced capability support

  Add the ability, for an application that uses libcurl, to obtain the list of
  capabilities returned from the EHLO command.

7.3 Add CURLOPT_MAIL_CLIENT option

  Rather than use the URL to specify the mail client string to present in the
  HELO and EHLO commands, libcurl should support a new CURLOPT specifically
  for specifying this data, as the URL is non-standard and, to be honest, a
  bit of a hack ;-)

  Please see the following thread for more information:
  https://curl.haxx.se/mail/lib-2012-05/0178.html

8. POP3

8.2 Enhanced capability support

  Add the ability, for an application that uses libcurl, to obtain the list of
  capabilities returned from the CAPA command.

9. IMAP

9.1 Enhanced capability support

  Add the ability, for an application that uses libcurl, to obtain the list of
  capabilities returned from the CAPABILITY command.

10. LDAP

10.1 SASL based authentication mechanisms

  Currently the LDAP module only supports ldap_simple_bind_s() in order to
  bind to an LDAP server. However, this function sends username and password
  details using the simple authentication mechanism (as clear text). It should
  instead be possible to use ldap_bind_s(), specifying the security context
  information ourselves.

11. SMB

11.1 File listing support

  Add support for listing the contents of an SMB share. The output should
  probably be the same as/similar to FTP.

11.2 Honor file timestamps

  The timestamp of the transferred file should reflect that of the original
  file.

11.3 Use NTLMv2

  Currently the SMB authentication uses NTLMv1.

11.4 Create remote directories

  Support for creating remote directories when uploading a file to a directory
  that doesn't exist on the server, just like --ftp-create-dirs.

12. New protocols

13. SSL

13.2 Provide mutex locking API

  Provide a libcurl API for setting mutex callbacks in the underlying SSL
  library, so that the same application code can use mutex-locking
  independently of OpenSSL or GnuTLS being used.

13.3 Support in-memory certs/ca certs/keys

  You can specify the private and public keys for SSH/SSL as file paths. Some
  programs want to avoid using files and instead just pass them as in-memory
  data blobs. There's probably a challenge to make this work across the
  plethora of different TLS and SSH backends that curl supports.
  https://github.com/curl/curl/issues/2310

13.4 Cache/share OpenSSL contexts

  "Look at SSL cafile - quick traces look to me like these are done on every
  request as well, when they should only be necessary once per SSL context (or
  once per handle)". The major improvement we can rather easily do is to make
  sure we don't create and kill a new SSL "context" for every request, but
  instead make one for every connection and re-use that SSL context in the
  same way connections are re-used. It will make us use slightly more memory,
  but it will let libcurl do fewer creations and deletions of SSL contexts.

  Technically, the "caching" is probably best implemented by getting added to
  the share interface so that easy handles that want to and can reuse the
  context specify that by sharing with the right properties set.

  https://github.com/curl/curl/issues/1110

13.5 Export session ids

  Add an interface to libcurl that enables "session IDs" to get
  exported/imported. Cris Bailiff said: "OpenSSL has functions which can
  serialise the current SSL state to a buffer of your choice, and
  recover/reset the state from such a buffer at a later date - this is used by
  mod_ssl for apache to implement an SSL session ID cache".

13.6 Provide callback for cert verification

  OpenSSL supports a callback for customised verification of the peer
  certificate, but this doesn't seem to be exposed in the libcurl APIs. Could
  it be? There's so much that could be done if it were!

13.7 improve configure --with-ssl

  Make the configure --with-ssl option first check for OpenSSL, then GnuTLS,
  then NSS...

13.8 Support DANE

  DNS-Based Authentication of Named Entities (DANE) is a way to provide SSL
  keys and certs over DNS using DNSSEC as an alternative to the CA model.

  https://www.rfc-editor.org/rfc/rfc6698.txt

  An initial patch was posted by Suresh Krishnaswamy on March 7th 2013
  (https://curl.haxx.se/mail/lib-2013-03/0075.html) but it was too simple an
  approach. See Daniel's comments:
  https://curl.haxx.se/mail/lib-2013-03/0103.html . libunbound may be the
  correct library to base this development on.

  Björn Stenberg wrote a separate initial take on DANE that was never
  completed.

13.10 Support Authority Information Access certificate extension (AIA)

  AIA can provide various things like CRLs but more importantly information
  about intermediate CA certificates that can allow the validation path to be
  completed when the HTTPS server doesn't itself provide them.

  Since AIA is about downloading certs on demand to complete a TLS handshake,
  it is probably a bit tricky to get done right.

  See https://github.com/curl/curl/issues/2793

13.11 Support intermediate & root pinning for PINNEDPUBLICKEY

  CURLOPT_PINNEDPUBLICKEY does not consider the hashes of intermediate & root
  certificates when comparing the pinned keys. Therefore it is not compatible
  with "HTTP Public Key Pinning", where intermediate and root certificates can
  also be pinned. This is very useful as it prevents web admins from locking
  themselves out of their servers.

  Adding this feature would make curl's pinning 100% compatible with HPKP and
  allow more flexible pinning.

13.12 Support HSTS

  "HTTP Strict Transport Security" is a TOFU (trust on first use), time-based
  feature indicated by an HTTP header sent by the web server. It is widely
  used in browsers and its purpose is to prevent insecure HTTP connections
  after a previous HTTPS connection. It protects against SSL stripping
  attacks.

  Doc: https://developer.mozilla.org/en-US/docs/Web/Security/HTTP_strict_transport_security
  RFC 6797: https://tools.ietf.org/html/rfc6797

13.14 Support the clienthello extension

  Certain stupid networks and middle boxes have a problem with SSL handshake
  packets that are within a certain size range, because of how that sets some
  bits that previously (in older TLS versions) were not set. The clienthello
  extension adds padding to avoid that size range.

  https://tools.ietf.org/html/rfc7685
  https://github.com/curl/curl/issues/2299

14. GnuTLS

14.2 check connection

  Add a way to check if the connection seems to be alive, to correspond to the
  SSL_peek() way we use with OpenSSL.

15. WinSSL/SChannel

15.1 Add support for client certificate authentication

  WinSSL/SChannel currently makes use of the OS-level system and user
  certificate and private key stores. This does not allow the application
  or the user to supply a custom client certificate using curl or libcurl.

  Therefore support for the existing -E/--cert and --key options should be
  implemented by supplying a custom certificate to the SChannel APIs, see:
  - Getting a Certificate for Schannel
    https://msdn.microsoft.com/en-us/library/windows/desktop/aa375447.aspx

15.3 Add support for the --ciphers option

  The cipher suites used by WinSSL/SChannel are configured on an OS level
  instead of an application level. This does not allow the application or
  the user to customize the configured cipher suites using curl or libcurl.

  Therefore support for the existing --ciphers option should be implemented
  by mapping the OpenSSL/GnuTLS cipher suites to the SChannel APIs, see
  - Specifying Schannel Ciphers and Cipher Strengths
    https://msdn.microsoft.com/en-us/library/windows/desktop/aa380161.aspx

15.4 Add option to disable client certificate auto-send

  Microsoft says "By default, Schannel will, with no notification to the
  client, attempt to locate a client certificate and send it to the server."
  That could be considered a privacy violation and unexpected.

  Some Windows users have come to expect that default behavior, and changing
  the default to make it consistent with other SSL backends would be a
  breaking change. An option should be added that can be used to disable the
  default Schannel auto-send behavior.

  https://github.com/curl/curl/issues/2262

16. SASL

16.1 Other authentication mechanisms

  Add support for other authentication mechanisms such as OLP,
  GSS-SPNEGO and others.

16.2 Add QOP support to GSSAPI authentication

  Currently the GSSAPI authentication only supports the default QOP of auth
  (Authentication), whilst Kerberos V5 supports both auth-int (Authentication
  with integrity protection) and auth-conf (Authentication with integrity and
  privacy protection).

16.3 Support binary messages (i.e.: non-base64)

  Mandatory to support LDAP SASL authentication.

17. SSH protocols

17.1 Multiplexing

  SSH is a perfectly fine multiplexed protocol, which would allow libcurl to
  do multiple parallel transfers from the same host using the same connection,
  much in the same spirit as HTTP/2 does. libcurl however does not take
  advantage of that ability but will instead always create a new connection
  for new transfers even if an existing connection already exists to the host.

  To fix this, libcurl would have to detect an existing connection and
  "attach" the new transfer to the existing one.

17.3 Support better than MD5 hostkey hash

  libcurl offers the CURLOPT_SSH_HOST_PUBLIC_KEY_MD5 option for verifying the
  server's key. MD5 is generally being deprecated so we should implement
  support for stronger hashing algorithms. libssh2 itself is what provides
  this underlying functionality and it supports at least SHA-1 as an
  alternative. SHA-1 is also being deprecated these days so we should consider
  working with libssh2 to instead offer support for SHA-256 or similar.

17.4 Support CURLOPT_PREQUOTE

  The two other QUOTE options are supported for SFTP, but this was left out
  for unknown reasons!

18. Command line tool

18.1 sync

  "curl --sync http://example.com/feed[1-100].rss" or
  "curl --sync http://example.net/{index,calendar,history}.html"

  Downloads a range or set of URLs using the remote name, but only if the
  remote file is newer than the local file. A Last-Modified HTTP date header
  should also be used to set the mod date on the downloaded file.

18.2 glob posts

  Globbing support for -d and -F, as in 'curl -d "name=foo[0-9]" URL'.
  This is easily scripted though.

18.3 prevent file overwriting

  Add an option that prevents curl from overwriting existing local files. When
  used, and there already is an existing file with the target file name
  (either -O or -o), a number should be appended (and increased if already
  existing). So that index.html becomes first index.html.1 and then
  index.html.2 etc. A small sketch of that numbering scheme follows below.
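
  Purely as an illustration of the numbering scheme described in 18.3 (this is
  not existing curl code; next_free_name is an invented helper), a minimal
  sketch:

    #include <stdio.h>
    #include <sys/stat.h>

    /* Sketch: pick a name that does not clobber an existing file by
       appending .1, .2, ... as 18.3 describes. Returns 0 on success. */
    static int next_free_name(const char *wanted, char *out, size_t outlen)
    {
      struct stat st;
      int i;

      if(stat(wanted, &st) != 0) {         /* no such file: use it as-is */
        snprintf(out, outlen, "%s", wanted);
        return 0;
      }
      for(i = 1; i < 1000; i++) {          /* index.html.1, index.html.2, ... */
        snprintf(out, outlen, "%s.%d", wanted, i);
        if(stat(out, &st) != 0)
          return 0;
      }
      return 1;                            /* gave up after too many tries */
    }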

18.5 UTF-8 filenames in Content-Disposition

  RFC 6266 documents how UTF-8 names can be passed to a client in the
  Content-Disposition header, and curl does not support this.

  https://github.com/curl/curl/issues/1888

18.7 at least N milliseconds between requests

  Allow curl command lines to issue a lot of requests against services that
  limit users to no more than N requests/second or similar. Could be
  implemented with an option asking that at least a certain time has elapsed
  since the previous request before the next one will be performed. Example:

    $ curl "https://example.com/api?input=[1-1000]" -d yadayada --after 500

  See https://github.com/curl/curl/issues/3920

18.9 Choose the name of file in braces for complex URLs

  When using braces to download a list of URLs and you use complicated names
  in the list of alternatives, it could be handy to allow curl to use other
  names when saving.

  Consider a way to offer that. Possibly like
  {partURL1:name1,partURL2:name2,partURL3:name3} where the name following the
  colon is the output name.

  See https://github.com/curl/curl/issues/221

18.10 improve how curl works in a windows console window

  If you pull the scrollbar when transferring with curl in a Windows console
  window, the transfer is interrupted and can get disconnected. This can
  probably be improved. See https://github.com/curl/curl/issues/322

18.11 Windows: set attribute 'archive' for completed downloads

  The archive bit (FILE_ATTRIBUTE_ARCHIVE, 0x20) separates files that shall be
  backed up from those that are either not ready or have not changed.

  Downloads in progress are neither ready to be backed up, nor should they be
  opened by a different process. Only after a download has been completed is
  it sensible to include it in any integral snapshot or backup of the system.

  See https://github.com/curl/curl/issues/3354

18.12 keep running, read instructions from pipe/socket

  Provide an option that makes curl not exit after the last URL (or even work
  without a given URL), and then make it read further instructions passed on a
  pipe or over a socket, so that a second subsequent curl invocation can talk
  to the still running instance and ask for transfers to get done, and thus
  maintain its connection pool, DNS cache and more.

18.15 --retry should resume

  When --retry is used and curl actually retries a transfer, it should use the
  already transferred data and do a resumed transfer for the rest (when
  possible) so that it doesn't have to transfer the same data again that was
  already transferred before the retry.

  See https://github.com/curl/curl/issues/1084

18.16 send only part of --data

  When the user only wants to send a small piece of the data provided with
  --data or --data-binary, like when that data is a huge file, consider a way
  to specify that curl should only send a piece of that. One suggested syntax
  would be: "--data-binary @largefile.zip!1073741823-2147483647".

  See https://github.com/curl/curl/issues/1200

18.17 consider file name from the redirected URL with -O ?

  When a user gives a URL and uses -O, and curl follows a redirect to a new
  URL, the file name is not extracted and used from the newly redirected-to
  URL even if the new URL may have a much more sensible file name.

  This is clearly documented and helps security, since there's no surprise to
  users about which file name might get overwritten. But maybe a new option
  could allow for this, or maybe -J should imply such treatment as well, since
  -J already allows the server to decide what file name to use and so already
  carries the "may overwrite any file" risk.

  This is extra tricky if the original URL has no file name part at all since
  then the current code path will error out with an error message, and we
  can't *know* already at that point if curl will be redirected to a URL that
  has a file name...

  See https://github.com/curl/curl/issues/1241

18.18 retry on network is unreachable

  The --retry option retries transfers on "transient failures". We later added
  --retry-connrefused to also retry for "connection refused" errors.

  Suggestions have been brought up to also allow retries on "network is
  unreachable" errors and, while totally reasonable, maybe we should consider
  a way to make this more configurable than adding a new option for every new
  error people want to retry for?

  https://github.com/curl/curl/issues/1603

18.19 expand ~/ in config files

  For example .curlrc could benefit from being able to do this. A minimal
  expansion sketch is shown after section 20.4 below.

  See https://github.com/curl/curl/issues/2317

18.20 host name sections in config files

  Config files would be more powerful if they could set different
  configurations depending on the URL used, the host name or possibly the
  origin. Then a default .curlrc could set a specific user-agent only when
  doing requests against a certain site.

19. Build

19.1 roffit

  Consider extending 'roffit' to produce decent ASCII output, and use that
  instead of (g)nroff when building src/tool_hugehelp.c

19.2 Enable PIE and RELRO by default

  Especially for programs that execute curl via the command line, PIE renders
  the exploitation of memory corruption vulnerabilities a lot more difficult.
  This can be attributed to the additional information leaks being required to
  conduct a successful attack. RELRO, on the other hand, marks different
  binary sections like the GOT as read-only and thus kills a handful of
  techniques that come in handy when attackers are able to arbitrarily
  overwrite memory. A few tests showed that enabling these features had close
  to no impact, neither on the performance nor on the general functionality of
  curl.

19.3 cmake test suite improvements

  The cmake build doesn't support 'make show' so it doesn't know which tests
  are in the makefile or not (making AppVeyor builds emit many false warnings
  about it), nor does it support running the test suite when building
  out-of-tree.

  See https://github.com/curl/curl/issues/3109

20. Test suite

20.1 SSL tunnel

  Make our own version of stunnel for simple port forwarding to enable HTTPS
  and FTP-SSL tests without the stunnel dependency, and it could allow us to
  provide test tools built with either OpenSSL or GnuTLS.

20.2 nicer lacking perl message

  If perl wasn't found by the configure script, don't attempt to run the tests
  but explain nicely why they can't run.

20.3 more protocols supported

  Extend the test suite to include more protocols. The telnet tests could just
  do FTP or HTTP operations (for which we have test servers).

20.4 more platforms supported

  Make the test suite work on more platforms. OpenBSD and Mac OS. Remove
  fork()s and it should become even more portable.
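
  As a hedged illustration of the ~/ expansion that 18.19 asks for
  (expand_home is an invented helper, not a curl symbol, and it assumes a
  POSIX-style environment where $HOME is set):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Sketch: expand a leading "~/" in a config-file value using $HOME.
       Returns a newly allocated string the caller must free(). */
    static char *expand_home(const char *path)
    {
      const char *home;
      size_t n;
      char *out;

      if(strncmp(path, "~/", 2) != 0)
        return strdup(path);              /* nothing to expand */

      home = getenv("HOME");
      if(!home)
        return strdup(path);              /* leave untouched if HOME is unset */

      n = strlen(home) + strlen(path);    /* home + rest of path + NUL */
      out = malloc(n);
      if(out)
        snprintf(out, n, "%s%s", home, path + 1);   /* skip the '~' */
      return out;
    }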

20.5 Add support for concurrent connections

  Tests 836, 882 and 938 were designed to verify that separate connections
  aren't used when using different login credentials in protocols that
  shouldn't re-use a connection under such circumstances.

  Unfortunately, ftpserver.pl doesn't appear to support multiple concurrent
  connections. The read while() loop seems to loop until it receives a
  disconnect from the client, where it then enters the wait-for-connections
  loop. When the client opens a second connection to the server, the first
  connection hasn't been dropped (unless it has been forced - which we
  shouldn't do in these tests) and thus the wait-for-connections loop is never
  entered to receive the second connection.

20.6 Use the RFC6265 test suite

  A test suite made for HTTP cookies (RFC 6265) by Adam Barth is available at
  https://github.com/abarth/http-state/tree/master/tests

  It'd be really awesome if someone would write a script/setup that would run
  curl with that test suite and detect deviances. Ideally, that would even be
  incorporated into our regular test suite.

20.7 Support LD_PRELOAD on macOS

  LD_PRELOAD doesn't work on macOS, but there are tests which require it to
  run properly. Look into making the preload support in runtests.pl portable
  such that it uses DYLD_INSERT_LIBRARIES on macOS.

21. Next SONAME bump

21.1 http-style HEAD output for FTP

  #undef CURL_FTP_HTTPSTYLE_HEAD in lib/ftp.c to remove the HTTP-style headers
  from being output in NOBODY requests over FTP

21.2 combine error codes

  Combine some of the error codes to remove duplicates. The original numbering
  should not be changed, and the old identifiers would be macroed to the new
  ones in a CURL_NO_OLDIES section to help with backward compatibility (a
  sketch of such a section follows after 22.5 below).

  Candidates for removal and their replacements:

    CURLE_FILE_COULDNT_READ_FILE => CURLE_REMOTE_FILE_NOT_FOUND

    CURLE_FTP_COULDNT_RETR_FILE => CURLE_REMOTE_FILE_NOT_FOUND

    CURLE_FTP_COULDNT_USE_REST => CURLE_RANGE_ERROR

    CURLE_FUNCTION_NOT_FOUND => CURLE_FAILED_INIT

    CURLE_LDAP_INVALID_URL => CURLE_URL_MALFORMAT

    CURLE_TFTP_NOSUCHUSER => CURLE_TFTP_ILLEGAL

    CURLE_TFTP_NOTFOUND => CURLE_REMOTE_FILE_NOT_FOUND

    CURLE_TFTP_PERM => CURLE_REMOTE_ACCESS_DENIED

21.3 extend CURLOPT_SOCKOPTFUNCTION prototype

  The current prototype only provides 'purpose' that tells what the
  connection/socket is for, but not any protocol or similar. It makes it hard
  for applications to differentiate on TCP vs UDP and even HTTP vs FTP and
  similar.

22. Next major release

22.1 cleanup return codes

  curl_easy_cleanup() returns void, but curl_multi_cleanup() returns a
  CURLMcode. These should be changed to be the same.

22.2 remove obsolete defines

  remove obsolete defines from curl/curl.h

22.3 size_t

  make several functions use size_t instead of int in their APIs

22.4 remove several functions

  remove the following functions from the public API:

    curl_getenv

    curl_mprintf (and variations)

    curl_strequal

    curl_strnequal

  They will instead become curlx_ alternatives. That makes the curl app still
  capable of using them, by building with them from source.

  These functions have no purpose anymore:

    curl_multi_socket

    curl_multi_socket_all

22.5 remove CURLOPT_FAILONERROR

  Remove support for CURLOPT_FAILONERROR, it has gotten too kludgy and weird
  internally. Let the app judge success or not for itself.
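
  A sketch of what the compatibility section proposed in 21.2 could look like,
  reusing the CURL_NO_OLDIES convention already present in curl/curl.h and the
  mappings listed above; this is an illustration, not the actual header:

    /* Old error code names kept as aliases for the combined codes,
       available unless the application defines CURL_NO_OLDIES. */
    #ifndef CURL_NO_OLDIES
    #define CURLE_FILE_COULDNT_READ_FILE  CURLE_REMOTE_FILE_NOT_FOUND
    #define CURLE_FTP_COULDNT_RETR_FILE   CURLE_REMOTE_FILE_NOT_FOUND
    #define CURLE_FTP_COULDNT_USE_REST    CURLE_RANGE_ERROR
    #define CURLE_LDAP_INVALID_URL        CURLE_URL_MALFORMAT
    #define CURLE_TFTP_NOTFOUND           CURLE_REMOTE_FILE_NOT_FOUND
    #define CURLE_TFTP_PERM               CURLE_REMOTE_ACCESS_DENIED
    #endif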

22.7 remove progress meter from libcurl

  The internally provided progress meter output doesn't belong in the library.
  Basically no application wants it (apart from curl), but instead
  applications can and should do their own progress meters using the progress
  callback (a minimal application-side example is sketched at the end of this
  document).

  The progress callback should then be bumped as well to get proper 64-bit
  variable types passed to it instead of doubles so that big files work
  correctly.

22.8 remove 'curl_httppost' from public

  curl_formadd() was made to fill in a public struct, but the fact that the
  struct is public is never really used by applications for their own
  advantage but instead often restricts how the form functions can or can't be
  modified.

  Changing them to return a private handle will benefit the implementation and
  allow us much greater freedoms while still maintaining a solid API and ABI.
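
  For reference, a hedged sketch of the application-side progress meter that
  22.7 recommends, using the existing CURLOPT_XFERINFOFUNCTION callback (which
  already passes 64-bit curl_off_t counters); show_progress is an invented
  name, not a curl symbol:

    #include <curl/curl.h>
    #include <stdio.h>

    /* Sketch: application-provided progress meter via the xferinfo callback */
    static int show_progress(void *clientp, curl_off_t dltotal, curl_off_t dlnow,
                             curl_off_t ultotal, curl_off_t ulnow)
    {
      (void)clientp; (void)ultotal; (void)ulnow;
      if(dltotal > 0)
        fprintf(stderr, "\r%3d%% of %" CURL_FORMAT_CURL_OFF_T " bytes",
                (int)(dlnow * 100 / dltotal), dltotal);
      return 0;               /* returning non-zero would abort the transfer */
    }

    /* usage:
         curl_easy_setopt(curl, CURLOPT_XFERINFOFUNCTION, show_progress);
         curl_easy_setopt(curl, CURLOPT_NOPROGRESS, 0L);                     */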
