+License: MIT
+Project-URL: Changelog, https://github.com/jawah/charset_normalizer/blob/master/CHANGELOG.md
+Project-URL: Documentation, https://charset-normalizer.readthedocs.io/
+Project-URL: Code, https://github.com/jawah/charset_normalizer
+Project-URL: Issue tracker, https://github.com/jawah/charset_normalizer/issues
+Keywords: encoding,charset,charset-detector,detector,normalization,unicode,chardet,detect
+Classifier: Development Status :: 5 - Production/Stable
+Classifier: Intended Audience :: Developers
+Classifier: Operating System :: OS Independent
+Classifier: Programming Language :: Python
+Classifier: Programming Language :: Python :: 3
+Classifier: Programming Language :: Python :: 3.7
+Classifier: Programming Language :: Python :: 3.8
+Classifier: Programming Language :: Python :: 3.9
+Classifier: Programming Language :: Python :: 3.10
+Classifier: Programming Language :: Python :: 3.11
+Classifier: Programming Language :: Python :: 3.12
+Classifier: Programming Language :: Python :: 3.13
+Classifier: Programming Language :: Python :: 3.14
+Classifier: Programming Language :: Python :: 3 :: Only
+Classifier: Programming Language :: Python :: Implementation :: CPython
+Classifier: Programming Language :: Python :: Implementation :: PyPy
+Classifier: Topic :: Text Processing :: Linguistic
+Classifier: Topic :: Utilities
+Classifier: Typing :: Typed
+Requires-Python: >=3.7
+Description-Content-Type: text/markdown
+License-File: LICENSE
+Provides-Extra: unicode-backport
+Dynamic: license-file
+
+# Charset Detection, for Everyone 👋
+
+*The Real First Universal Charset Detector*
+
+> A library that helps you read text from an unknown charset encoding. Motivated by `chardet`,
+> I'm trying to resolve the issue by taking a new approach.
+> All IANA character set names for which the Python core library provides codecs are supported.
+
+
+
+This project offers you an alternative to **Universal Charset Encoding Detector**, also known as **Chardet**.
+
+| Feature | [Chardet](https://github.com/chardet/chardet) | Charset Normalizer | [cChardet](https://github.com/PyYoshi/cChardet) |
+|--------------------------------------------------|:---------------------------------------------:|:--------------------------------------------------------------------------------------------------:|:-----------------------------------------------:|
+| `Fast` | ❌ | ✅ | ✅ |
+| `Universal**` | ❌ | ✅ | ❌ |
+| `Reliable` **without** distinguishable standards | ❌ | ✅ | ✅ |
+| `Reliable` **with** distinguishable standards | ✅ | ✅ | ✅ |
+| `License` | LGPL-2.1<br>_restrictive_ | MIT | MPL-1.1<br>_restrictive_ |
+| `Native Python` | ✅ | ✅ | ❌ |
+| `Detect spoken language` | ❌ | ✅ | N/A |
+| `UnicodeDecodeError Safety` | ❌ | ✅ | ❌ |
+| `Whl Size (min)` | 193.6 kB | 42 kB | ~200 kB |
+| `Supported Encoding` | 33 | 🎉 [99](https://charset-normalizer.readthedocs.io/en/latest/user/support.html#supported-encodings) | 40 |
+
+
+
+
+
+*\*\*: They rely on encoding-specific code for specific encodings, even though they cover most of the commonly used ones.*
+
+## ⚡ Performance
+
+This package offers better performance than its counterpart, Chardet. Here are some numbers.
+
+| Package | Accuracy | Mean per file (ms) | File per sec (est) |
+|-----------------------------------------------|:--------:|:------------------:|:------------------:|
+| [chardet](https://github.com/chardet/chardet) | 86 % | 63 ms | 16 file/sec |
+| charset-normalizer | **98 %** | **10 ms** | 100 file/sec |
+
+| Package | 99th percentile | 95th percentile | 50th percentile |
+|-----------------------------------------------|:---------------:|:---------------:|:---------------:|
+| [chardet](https://github.com/chardet/chardet) | 265 ms | 71 ms | 7 ms |
+| charset-normalizer | 100 ms | 50 ms | 5 ms |
+
+_Updated as of December 2024 using CPython 3.12_
+
+Chardet's performance on larger files (1 MB+) is very poor. Expect a huge difference on large payloads.
+
+> Stats are generated using 400+ files with default parameters. For more details on the files used, see the GHA workflows.
+> And yes, these results might change at any time. The dataset can be updated to include more files.
+> The actual delays depend heavily on your CPU capabilities, but the relative factors should remain the same.
+> Keep in mind that the stats are generous and that Chardet's accuracy versus ours is measured using Chardet's initial capability
+> (e.g. supported encodings). Challenge them if you want.
+
+## ✨ Installation
+
+Using pip:
+
+```sh
+pip install charset-normalizer -U
+```
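+
+If the installation succeeded, the `normalizer` CLI described below should be on your PATH. A minimal sanity check (the exact output format varies between versions):
+
+```sh
+# Prints the installed charset-normalizer version; since 3.0.0 it also
+# indicates whether the mypyc-compiled speedup is in use.
+normalizer --version
+```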
+
+## 🚀 Basic Usage
+
+### CLI
+This package comes with a CLI.
+
+```
+usage: normalizer [-h] [-v] [-a] [-n] [-m] [-r] [-f] [-t THRESHOLD]
+ file [file ...]
+
+The Real First Universal Charset Detector. Discover originating encoding used
+on text file. Normalize text to unicode.
+
+positional arguments:
+ files File(s) to be analysed
+
+optional arguments:
+ -h, --help show this help message and exit
+ -v, --verbose Display complementary information about file if any.
+ Stdout will contain logs about the detection process.
+ -a, --with-alternative
+ Output complementary possibilities if any. Top-level
+ JSON WILL be a list.
+ -n, --normalize Permit to normalize input file. If not set, program
+ does not write anything.
+ -m, --minimal Only output the charset detected to STDOUT. Disabling
+ JSON output.
+ -r, --replace Replace file when trying to normalize it instead of
+ creating a new one.
+ -f, --force Replace file without asking if you are sure, use this
+ flag with caution.
+ -t THRESHOLD, --threshold THRESHOLD
+ Define a custom maximum amount of chaos allowed in
+ decoded content. 0. <= chaos <= 1.
+ --version Show version information and exit.
+```
+
+```bash
+normalizer ./data/sample.1.fr.srt
+```
+
+or
+
+```bash
+python -m charset_normalizer ./data/sample.1.fr.srt
+```
+
+🎉 Since version 1.4.0, the CLI produces an easily usable stdout result in JSON format.
+
+```json
+{
+ "path": "/home/default/projects/charset_normalizer/data/sample.1.fr.srt",
+ "encoding": "cp1252",
+ "encoding_aliases": [
+ "1252",
+ "windows_1252"
+ ],
+ "alternative_encodings": [
+ "cp1254",
+ "cp1256",
+ "cp1258",
+ "iso8859_14",
+ "iso8859_15",
+ "iso8859_16",
+ "iso8859_3",
+ "iso8859_9",
+ "latin_1",
+ "mbcs"
+ ],
+ "language": "French",
+ "alphabets": [
+ "Basic Latin",
+ "Latin-1 Supplement"
+ ],
+ "has_sig_or_bom": false,
+ "chaos": 0.149,
+ "coherence": 97.152,
+ "unicode_path": null,
+ "is_preferred": true
+}
+```
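+
+When a script only needs the encoding name, the `-m`/`--minimal` flag from the help above skips the JSON output and prints just the detected charset:
+
+```bash
+# Prints only the detected charset for the sample above, e.g. "cp1252"
+normalizer -m ./data/sample.1.fr.srt
+```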
+
+### Python
+*Just print out normalized text*
+```python
+from charset_normalizer import from_path
+
+results = from_path('./my_subtitle.srt')
+
+print(str(results.best()))
+```
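+
+`best()` returns `None` when no plausible encoding is found, so a slightly more defensive sketch (the attributes below mirror the `encoding` and `language` fields of the CLI JSON output):
+
+```python
+from charset_normalizer import from_path
+
+best_guess = from_path('./my_subtitle.srt').best()
+
+if best_guess is None:
+    print("No suitable encoding found; the payload may be binary.")
+else:
+    # e.g. "cp1252" and "French" for the sample shown earlier
+    print(best_guess.encoding, best_guess.language)
+```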
+
+*Upgrade your code without effort*
+```python
+from charset_normalizer import detect
+```
+
+The above code will behave the same as **chardet**. We ensure that we offer the best (reasonable) backward-compatible result possible.
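+
+A minimal sketch of that drop-in usage (the returned mapping mimics chardet's, with `encoding`, `language` and `confidence` keys):
+
+```python
+from charset_normalizer import detect
+
+# Same call shape as chardet.detect(): pass raw bytes, get a dict back.
+result = detect('Bсеки човек има право на образование.'.encode('utf_8'))
+
+print(result['encoding'])  # e.g. "utf-8"
+```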
+
+See the docs for advanced usage: [readthedocs.io](https://charset-normalizer.readthedocs.io/en/latest/)
+
+## 😇 Why
+
+When I started using Chardet, I noticed that it did not meet my expectations, and I wanted to propose a
+reliable alternative using a completely different method. Also! I never back down from a good challenge!
+
+I **don't care** about the **originating charset** encoding, because **two different tables** can
+produce **two identical rendered strings.**
+What I want is to get readable text, the best I can.
+
+In a way, **I'm brute-forcing text decoding.** How cool is that? 😎
+
+Don't confuse the package **ftfy** with charset-normalizer or chardet. ftfy's goal is to repair Unicode strings, whereas charset-normalizer converts a raw file in an unknown encoding to Unicode.
+
+## 🍰 How
+
+ - Discard all charset encoding tables that could not fit the binary content.
+ - Measure the noise, or mess, once the content is opened (in chunks) with a corresponding charset encoding.
+ - Extract the matches with the lowest mess detected.
+ - Additionally, we measure coherence / probe for a language.
+
+**Wait a minute**, what are noise/mess and coherence according to **YOU?**
+
+*Noise:* I opened hundreds of text files, **written by humans**, with the wrong encoding table. **I observed**, then
+**I established** some ground rules about **what is obvious** when **it seems like** a mess (i.e. defining noise in rendered text).
+ I know that my interpretation of noise is probably incomplete; feel free to contribute in order to
+ improve or rewrite it.
+
+*Coherence:* For each language on Earth, we have computed ranked letter-appearance occurrences (as best we can). So I thought
+that intel is worth something here. So I use those records against the decoded text to check whether I can detect intelligent design.
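+
+Both measures are exposed on every match the API returns. A minimal sketch using the `chaos` (noise/mess) and `coherence` attributes, which back the fields of the same name in the CLI JSON output (the API may report them on a different scale than the CLI rendering):
+
+```python
+from charset_normalizer import from_bytes
+
+payload = 'Bсеки човек има право на образование.'.encode('cp1251')
+
+for match in from_bytes(payload):
+    # Lower chaos means less "mess"; higher coherence means the decoded
+    # text looks more like a real language.
+    print(match.encoding, match.chaos, match.coherence)
+```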
+
+## ⚡ Known limitations
+
+ - Language detection is unreliable when the text contains two or more languages sharing identical letters (e.g. HTML with English tags + Turkish content, both sharing Latin characters).
+ - Every charset detector heavily depends on sufficient content. In common cases, do not bother running detection on very tiny content.
+
+## ⚠️ About Python EOLs
+
+**If you are running:**
+
+- Python >=2.7,<3.5: Unsupported
+- Python 3.5: charset-normalizer < 2.1
+- Python 3.6: charset-normalizer < 3.1
+- Python 3.7: charset-normalizer < 4.0
+
+Upgrade your Python interpreter as soon as possible.
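+
+If you must stay on an end-of-life interpreter for a while, the caps above translate into pins such as the following sketch (standard pip environment markers; adjust to your setup):
+
+```
+charset-normalizer<3.1 ; python_version == "3.6"
+charset-normalizer<4.0 ; python_version == "3.7"
+```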
+
+## 👤 Contributing
+
+Contributions, issues and feature requests are very much welcome.
+Feel free to check the [issues page](https://github.com/ousret/charset_normalizer/issues) if you want to contribute.
+
+## 📝 License
+
+Copyright © [Ahmed TAHRI @Ousret](https://github.com/Ousret).
+This project is [MIT](https://github.com/Ousret/charset_normalizer/blob/master/LICENSE) licensed.
+
+Character frequencies used in this project © 2012 [Denny Vrandečić](http://simia.net/letters/)
+
+## 💼 For Enterprise
+
+Professional support for charset-normalizer is available as part of the [Tidelift
+Subscription][1]. Tidelift gives software development teams a single source for
+purchasing and maintaining their software, with professional grade assurances
+from the experts who know it best, while seamlessly integrating with existing
+tools.
+
+[1]: https://tidelift.com/subscription/pkg/pypi-charset-normalizer?utm_source=pypi-charset-normalizer&utm_medium=readme
+
+[OpenSSF Best Practices](https://www.bestpractices.dev/projects/7297)
+
+# Changelog
+All notable changes to charset-normalizer will be documented in this file. This project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
+The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
+
+## [3.4.3](https://github.com/Ousret/charset_normalizer/compare/3.4.2...3.4.3) (2025-08-09)
+
+### Changed
+- mypy(c) is no longer a required dependency at build time if `CHARSET_NORMALIZER_USE_MYPYC` isn't set to `1`. (#595) (#583)
+- Automatically lower the confidence on small byte samples that are not Unicode in the legacy `detect` output. (#391)
+
+### Added
+- Custom build backend to overcome inability to mark mypy as an optional dependency in the build phase.
+- Support for Python 3.14
+
+### Fixed
+- The sdist archive contained useless directories.
+- Automatically fall back on valid UTF-16 or UTF-32 even if the md says it's noisy. (#633)
+
+### Misc
+- SBOMs are automatically published to the relevant GitHub release to comply with regulatory changes.
+  Each published wheel comes with its SBOM. We chose CycloneDX as the format.
+- Prebuilt optimized wheels are no longer distributed by default for CPython 3.7 due to a change in cibuildwheel.
+
+## [3.4.2](https://github.com/Ousret/charset_normalizer/compare/3.4.1...3.4.2) (2025-05-02)
+
+### Fixed
+- Addressed the DeprecationWarning in our CLI regarding `argparse.FileType` by backporting the target class into the package. (#591)
+- Improved the overall reliability of the detector with CJK Ideographs. (#605) (#587)
+
+### Changed
+- Optional mypyc compilation upgraded to version 1.15 for Python >= 3.8
+
+## [3.4.1](https://github.com/Ousret/charset_normalizer/compare/3.4.0...3.4.1) (2024-12-24)
+
+### Changed
+- Project metadata is now stored using `pyproject.toml` instead of `setup.cfg`, using setuptools as the build backend.
+- Enforce delayed annotation loading for simpler and more consistent types in the project.
+- Optional mypyc compilation upgraded to version 1.14 for Python >= 3.8
+
+### Added
+- pre-commit configuration.
+- noxfile.
+
+### Removed
+- `build-requirements.txt` as per using `pyproject.toml` native build configuration.
+- `bin/integration.py` and `bin/serve.py` in favor of downstream integration test (see noxfile).
+- `setup.cfg` in favor of `pyproject.toml` metadata configuration.
+- Unused `utils.range_scan` function.
+
+### Fixed
+- Converting content to Unicode bytes may insert `utf_8` instead of preferred `utf-8`. (#572)
+- Deprecation warning "'count' is passed as positional argument" when converting to Unicode bytes on Python 3.13+
+
+## [3.4.0](https://github.com/Ousret/charset_normalizer/compare/3.3.2...3.4.0) (2024-10-08)
+
+### Added
+- Argument `--no-preemptive` in the CLI to prevent the detector from searching for hints.
+- Support for Python 3.13 (#512)
+
+### Fixed
+- Relax the TypeError exception thrown when trying to compare a CharsetMatch with anything other than a CharsetMatch.
+- Improved the general reliability of the detector based on user feedback. (#520) (#509) (#498) (#407) (#537)
+- Declared charset in content (preemptive detection) not changed when converting to utf-8 bytes. (#381)
+
+## [3.3.2](https://github.com/Ousret/charset_normalizer/compare/3.3.1...3.3.2) (2023-10-31)
+
+### Fixed
+- Unintentional memory usage regression when using a large payload that matches several encodings (#376)
+- Regression on some detection cases showcased in the documentation (#371)
+
+### Added
+- Noise (md) probe that identifies malformed Arabic representations due to the presence of letters in isolated form (credit to my wife)
+
+## [3.3.1](https://github.com/Ousret/charset_normalizer/compare/3.3.0...3.3.1) (2023-10-22)
+
+### Changed
+- Optional mypyc compilation upgraded to version 1.6.1 for Python >= 3.8
+- Improved the general detection reliability based on reports from the community
+
+## [3.3.0](https://github.com/Ousret/charset_normalizer/compare/3.2.0...3.3.0) (2023-09-30)
+
+### Added
+- Allow to execute the CLI (e.g. normalizer) through `python -m charset_normalizer.cli` or `python -m charset_normalizer`
+- Support for 9 forgotten encodings that are supported by Python but unlisted in `encodings.aliases` as they have no alias (#323)
+
+### Removed
+- (internal) Redundant utils.is_ascii function and unused function is_private_use_only
+- (internal) charset_normalizer.assets is moved inside charset_normalizer.constant
+
+### Changed
+- (internal) Unicode code blocks in constants are updated using the latest v15.0.0 definition to improve detection
+- Optional mypyc compilation upgraded to version 1.5.1 for Python >= 3.8
+
+### Fixed
+- Unable to properly sort CharsetMatch when both chaos/noise and coherence were close due to an unreachable condition in \_\_lt\_\_ (#350)
+
+## [3.2.0](https://github.com/Ousret/charset_normalizer/compare/3.1.0...3.2.0) (2023-06-07)
+
+### Changed
+- The type hint for function `from_path` no longer enforces `PathLike` as its first argument
+- Minor improvement over the global detection reliability
+
+### Added
+- Introduce function `is_binary` that relies on the main capabilities and is optimized to detect binaries
+- Propagate the `enable_fallback` argument throughout `from_bytes`, `from_path`, and `from_fp`, allowing deeper control over the detection (default True)
+- Explicit support for Python 3.12
+
+### Fixed
+- Edge case detection failure where a file would contain 'very-long' camel cased word (Issue #289)
+
+## [3.1.0](https://github.com/Ousret/charset_normalizer/compare/3.0.1...3.1.0) (2023-03-06)
+
+### Added
+- Argument `should_rename_legacy` for the legacy function `detect`; it now disregards any new arguments without errors (PR #262)
+
+### Removed
+- Support for Python 3.6 (PR #260)
+
+### Changed
+- Optional speedup provided by mypy/c 1.0.1
+
+## [3.0.1](https://github.com/Ousret/charset_normalizer/compare/3.0.0...3.0.1) (2022-11-18)
+
+### Fixed
+- The multi-byte cutter/chunk generator did not always cut correctly (PR #233)
+
+### Changed
+- Speedup provided by mypy/c 0.990 on Python >= 3.7
+
+## [3.0.0](https://github.com/Ousret/charset_normalizer/compare/2.1.1...3.0.0) (2022-10-20)
+
+### Added
+- Extend the capability of explain=True when cp_isolation contains at most two entries (min one); it will log the details of the Mess-detector results
+- Support for alternative language frequency sets in charset_normalizer.assets.FREQUENCIES
+- Add parameter `language_threshold` in `from_bytes`, `from_path` and `from_fp` to adjust the minimum expected coherence ratio
+- `normalizer --version` now specifies if the current version provides extra speedup (meaning a mypyc-compiled wheel)
+
+### Changed
+- Build with static metadata using 'build' frontend
+- Make the language detection stricter
+- Optional: Module `md.py` can be compiled using Mypyc to provide an extra speedup up to 4x faster than v2.1
+
+### Fixed
+- CLI with opt --normalize failed when using a full path for files
+- TooManyAccentuatedPlugin induced false positives on the mess detection when too few alpha characters had been fed to it
+- Sphinx warnings when generating the documentation
+
+### Removed
+- Coherence detector no longer returns 'Simple English'; it returns 'English' instead
+- Coherence detector no longer returns 'Classical Chinese'; it returns 'Chinese' instead
+- Breaking: Methods `first()` and `best()` from CharsetMatch
+- UTF-7 will no longer appear as "detected" without a recognized SIG/mark (it is unreliable and conflicts with ASCII)
+- Breaking: Class aliases CharsetDetector, CharsetDoctor, CharsetNormalizerMatch and CharsetNormalizerMatches
+- Breaking: Top-level function `normalize`
+- Breaking: Properties `chaos_secondary_pass`, `coherence_non_latin` and `w_counter` from CharsetMatch
+- Support for the backport `unicodedata2`
+
+## [3.0.0rc1](https://github.com/Ousret/charset_normalizer/compare/3.0.0b2...3.0.0rc1) (2022-10-18)
+
+### Added
+- Extend the capability of explain=True when cp_isolation contains at most two entries (min one); it will log the details of the Mess-detector results
+- Support for alternative language frequency sets in charset_normalizer.assets.FREQUENCIES
+- Add parameter `language_threshold` in `from_bytes`, `from_path` and `from_fp` to adjust the minimum expected coherence ratio
+
+### Changed
+- Build with static metadata using 'build' frontend
+- Make the language detection stricter
+
+### Fixed
+- CLI with opt --normalize failed when using a full path for files
+- TooManyAccentuatedPlugin induced false positives on the mess detection when too few alpha characters had been fed to it
+
+### Removed
+- Coherence detector no longer returns 'Simple English'; it returns 'English' instead
+- Coherence detector no longer returns 'Classical Chinese'; it returns 'Chinese' instead
+
+## [3.0.0b2](https://github.com/Ousret/charset_normalizer/compare/3.0.0b1...3.0.0b2) (2022-08-21)
+
+### Added
+- `normalizer --version` now specifies if the current version provides extra speedup (meaning a mypyc-compiled wheel)
+
+### Removed
+- Breaking: Methods `first()` and `best()` from CharsetMatch
+- UTF-7 will no longer appear as "detected" without a recognized SIG/mark (it is unreliable and conflicts with ASCII)
+
+### Fixed
+- Sphinx warnings when generating the documentation
+
+## [3.0.0b1](https://github.com/Ousret/charset_normalizer/compare/2.1.0...3.0.0b1) (2022-08-15)
+
+### Changed
+- Optional: Module `md.py` can be compiled using Mypyc to provide an extra speedup up to 4x faster than v2.1
+
+### Removed
+- Breaking: Class aliases CharsetDetector, CharsetDoctor, CharsetNormalizerMatch and CharsetNormalizerMatches
+- Breaking: Top-level function `normalize`
+- Breaking: Properties `chaos_secondary_pass`, `coherence_non_latin` and `w_counter` from CharsetMatch
+- Support for the backport `unicodedata2`
+
+## [2.1.1](https://github.com/Ousret/charset_normalizer/compare/2.1.0...2.1.1) (2022-08-19)
+
+### Deprecated
+- Function `normalize` scheduled for removal in 3.0
+
+### Changed
+- Removed useless call to decode in fn is_unprintable (#206)
+
+### Fixed
+- Third-party library (i18n xgettext) crashing not recognizing utf_8 (PEP 263) with underscore from [@aleksandernovikov](https://github.com/aleksandernovikov) (#204)
+
+## [2.1.0](https://github.com/Ousret/charset_normalizer/compare/2.0.12...2.1.0) (2022-06-19)
+
+### Added
+- Output the Unicode table version when running the CLI with `--version` (PR #194)
+
+### Changed
+- Re-use decoded buffer for single byte character sets from [@nijel](https://github.com/nijel) (PR #175)
+- Fixing some performance bottlenecks from [@deedy5](https://github.com/deedy5) (PR #183)
+
+### Fixed
+- Workaround potential bug in cpython with Zero Width No-Break Space located in Arabic Presentation Forms-B, Unicode 1.1 not acknowledged as space (PR #175)
+- CLI default threshold aligned with the API threshold from [@oleksandr-kuzmenko](https://github.com/oleksandr-kuzmenko) (PR #181)
+
+### Removed
+- Support for Python 3.5 (PR #192)
+
+### Deprecated
+- Use of backport unicodedata from `unicodedata2` as Python is quickly catching up, scheduled for removal in 3.0 (PR #194)
+
+## [2.0.12](https://github.com/Ousret/charset_normalizer/compare/2.0.11...2.0.12) (2022-02-12)
+
+### Fixed
+- ASCII mis-detection in rare cases (PR #170)
+
+## [2.0.11](https://github.com/Ousret/charset_normalizer/compare/2.0.10...2.0.11) (2022-01-30)
+
+### Added
+- Explicit support for Python 3.11 (PR #164)
+
+### Changed
+- The logging behavior has been completely reviewed, now using only TRACE and DEBUG levels (PR #163 #165)
+
+## [2.0.10](https://github.com/Ousret/charset_normalizer/compare/2.0.9...2.0.10) (2022-01-04)
+
+### Fixed
+- Fallback match entries might lead to UnicodeDecodeError for large bytes sequence (PR #154)
+
+### Changed
+- Skipping the language-detection (CD) on ASCII (PR #155)
+
+## [2.0.9](https://github.com/Ousret/charset_normalizer/compare/2.0.8...2.0.9) (2021-12-03)
+
+### Changed
+- Moderating the logging impact (since 2.0.8) for specific environments (PR #147)
+
+### Fixed
+- Wrong logging level applied when setting kwarg `explain` to True (PR #146)
+
+## [2.0.8](https://github.com/Ousret/charset_normalizer/compare/2.0.7...2.0.8) (2021-11-24)
+### Changed
+- Improvement over Vietnamese detection (PR #126)
+- MD improvement on trailing data and long foreign (non-pure latin) data (PR #124)
+- Efficiency improvements in cd/alphabet_languages from [@adbar](https://github.com/adbar) (PR #122)
+- call sum() without an intermediary list following PEP 289 recommendations from [@adbar](https://github.com/adbar) (PR #129)
+- Code style as refactored by Sourcery-AI (PR #131)
+- Minor adjustment on the MD around european words (PR #133)
+- Remove and replace SRTs from assets / tests (PR #139)
+- Initialize the library logger with a `NullHandler` by default from [@nmaynes](https://github.com/nmaynes) (PR #135)
+- Setting kwarg `explain` to True will add provisionally (bounded to function lifespan) a specific stream handler (PR #135)
+
+### Fixed
+- Fix large (misleading) sequence giving UnicodeDecodeError (PR #137)
+- Avoid using too insignificant chunk (PR #137)
+
+### Added
+- Add and expose function `set_logging_handler` to configure a specific StreamHandler from [@nmaynes](https://github.com/nmaynes) (PR #135)
+- Add `CHANGELOG.md` entries, format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/) (PR #141)
+
+## [2.0.7](https://github.com/Ousret/charset_normalizer/compare/2.0.6...2.0.7) (2021-10-11)
+### Added
+- Add support for Kazakh (Cyrillic) language detection (PR #109)
+
+### Changed
+- Further improve inferring the language from a given single-byte code page (PR #112)
+- Vainly trying to leverage PEP263 when PEP3120 is not supported (PR #116)
+- Refactoring for potential performance improvements in loops from [@adbar](https://github.com/adbar) (PR #113)
+- Various detection improvement (MD+CD) (PR #117)
+
+### Removed
+- Remove redundant logging entry about detected language(s) (PR #115)
+
+### Fixed
+- Fix a minor inconsistency between Python 3.5 and other versions regarding language detection (PR #117 #102)
+
+## [2.0.6](https://github.com/Ousret/charset_normalizer/compare/2.0.5...2.0.6) (2021-09-18)
+### Fixed
+- Unforeseen regression with the loss of backward compatibility with some older minor versions of Python 3.5.x (PR #100)
+- Fix CLI crash when using --minimal output in certain cases (PR #103)
+
+### Changed
+- Minor improvement to the detection efficiency (less than 1%) (PR #106 #101)
+
+## [2.0.5](https://github.com/Ousret/charset_normalizer/compare/2.0.4...2.0.5) (2021-09-14)
+### Changed
+- The project now complies with: flake8, mypy, isort and black to ensure a better overall quality (PR #81)
+- The BC-support with v1.x was improved, the old staticmethods are restored (PR #82)
+- The Unicode detection is slightly improved (PR #93)
+- Add syntax sugar \_\_bool\_\_ for results CharsetMatches list-container (PR #91)
+
+### Removed
+- The project no longer raises a warning on tiny content given for detection; it is simply logged as a warning instead (PR #92)
+
+### Fixed
+- In some rare case, the chunks extractor could cut in the middle of a multi-byte character and could mislead the mess detection (PR #95)
+- Some rare 'space' characters could trip up the UnprintablePlugin/Mess detection (PR #96)
+- The MANIFEST.in was not exhaustive (PR #78)
+
+## [2.0.4](https://github.com/Ousret/charset_normalizer/compare/2.0.3...2.0.4) (2021-07-30)
+### Fixed
+- The CLI no longer raises an unexpected exception when no encoding has been found (PR #70)
+- Fix accessing the 'alphabets' property when the payload contains surrogate characters (PR #68)
+- The logger could mislead (explain=True) on detected languages and the impact of one MBCS match (PR #72)
+- Submatch factoring could be wrong in rare edge cases (PR #72)
+- Multiple files given to the CLI were ignored when publishing results to STDOUT. (After the first path) (PR #72)
+- Fix line endings from CRLF to LF for certain project files (PR #67)
+
+### Changed
+- Adjust the MD to lower the sensitivity, thus improving the global detection reliability (PR #69 #76)
+- Allow fallback on specified encoding if any (PR #71)
+
+## [2.0.3](https://github.com/Ousret/charset_normalizer/compare/2.0.2...2.0.3) (2021-07-16)
+### Changed
+- Part of the detection mechanism has been improved to be less sensitive, resulting in more accurate detection results. Especially ASCII. (PR #63)
+- According to the community wishes, the detection will fall back on ASCII or UTF-8 in a last-resort case. (PR #64)
+
+## [2.0.2](https://github.com/Ousret/charset_normalizer/compare/2.0.1...2.0.2) (2021-07-15)
+### Fixed
+- Empty/Too small JSON payload miss-detection fixed. Report from [@tseaver](https://github.com/tseaver) (PR #59)
+
+### Changed
+- Don't inject unicodedata2 into sys.modules from [@akx](https://github.com/akx) (PR #57)
+
+## [2.0.1](https://github.com/Ousret/charset_normalizer/compare/2.0.0...2.0.1) (2021-07-13)
+### Fixed
+- Make it work where there isn't a filesystem available, dropping assets frequencies.json. Report from [@sethmlarson](https://github.com/sethmlarson). (PR #55)
+- Using explain=False permanently disabled the verbose output in the current runtime (PR #47)
+- One log entry (language target preemptive) was not shown in logs when using explain=True (PR #47)
+- Fix undesired exception (ValueError) on getitem of instance CharsetMatches (PR #52)
+
+### Changed
+- Public function normalize default args values were not aligned with from_bytes (PR #53)
+
+### Added
+- You may now use charset aliases in cp_isolation and cp_exclusion arguments (PR #47)
+
+## [2.0.0](https://github.com/Ousret/charset_normalizer/compare/1.4.1...2.0.0) (2021-07-02)
+### Changed
+- 4x to 5x faster than the previous 1.4.0 release. At least 2x faster than Chardet.
+- Emphasis has been put on UTF-8 detection, which should perform nearly instantaneously.
+- The backward compatibility with Chardet has been greatly improved. The legacy detect function returns an identical charset name whenever possible.
+- The detection mechanism has been slightly improved; Turkish content is now detected correctly (most of the time)
+- The program has been rewritten to ease readability and maintainability (using static typing)
+- utf_7 detection has been reinstated.
+
+### Removed
+- This package no longer requires anything when used with Python 3.5 (Dropped cached_property)
+- Removed support for these languages: Catalan, Esperanto, Kazakh, Basque, Volapük, Azeri, Galician, Nynorsk, Macedonian, and Serbo-Croatian.
+- The exception hook on UnicodeDecodeError has been removed.
+
+### Deprecated
+- Methods coherence_non_latin, w_counter, chaos_secondary_pass of the class CharsetMatch are now deprecated and scheduled for removal in v3.0
+
+### Fixed
+- The CLI output used the relative path of the file(s). Should be absolute.
+
+## [1.4.1](https://github.com/Ousret/charset_normalizer/compare/1.4.0...1.4.1) (2021-05-28)
+### Fixed
+- Logger configuration/usage no longer conflict with others (PR #44)
+
+## [1.4.0](https://github.com/Ousret/charset_normalizer/compare/1.3.9...1.4.0) (2021-05-21)
+### Removed
+- Using standard logging instead of using the package loguru.
+- Dropping nose test framework in favor of the maintained pytest.
+- Choose to not use dragonmapper package to help with gibberish Chinese/CJK text.
+- Require cached_property only for Python 3.5 due to constraint. Dropping for every other interpreter version.
+- Stop support for UTF-7 that does not contain a SIG.
+- Dropping PrettyTable, replaced with pure JSON output in CLI.
+
+### Fixed
+- BOM marker in a CharsetNormalizerMatch instance could be False in rare cases even if obviously present. Due to the sub-match factoring process.
+- Not searching properly for the BOM when trying utf32/16 parent codec.
+
+### Changed
+- Improving the package final size by compressing frequencies.json.
+- Huge improvement on the largest payloads.
+
+### Added
+- CLI now produces JSON consumable output.
+- Return ASCII if given sequences fit. Given reasonable confidence.
+
+## [1.3.9](https://github.com/Ousret/charset_normalizer/compare/1.3.8...1.3.9) (2021-05-13)
+
+### Fixed
+- In some very rare cases, you may end up getting encode/decode errors due to a bad bytes payload (PR #40)
+
+## [1.3.8](https://github.com/Ousret/charset_normalizer/compare/1.3.7...1.3.8) (2021-05-12)
+
+### Fixed
+- Empty given payload for detection may cause an exception if trying to access the `alphabets` property. (PR #39)
+
+## [1.3.7](https://github.com/Ousret/charset_normalizer/compare/1.3.6...1.3.7) (2021-05-12)
+
+### Fixed
+- The legacy detect function should return UTF-8-SIG if sig is present in the payload. (PR #38)
+
+## [1.3.6](https://github.com/Ousret/charset_normalizer/compare/1.3.5...1.3.6) (2021-02-09)
+
+### Changed
+- Amend the previous release to allow prettytable 2.0 (PR #35)
+
+## [1.3.5](https://github.com/Ousret/charset_normalizer/compare/1.3.4...1.3.5) (2021-02-08)
+
+### Fixed
+- Fix error while using the package with a python pre-release interpreter (PR #33)
+
+### Changed
+- Dependencies refactoring, constraints revised.
+
+### Added
+- Add python 3.9 and 3.10 to the supported interpreters
+
+MIT License
+
+Copyright (c) 2025 TAHRI Ahmed R.
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer-3.4.3.dist-info/RECORD b/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer-3.4.3.dist-info/RECORD
new file mode 100644
index 0000000..151b492
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer-3.4.3.dist-info/RECORD
@@ -0,0 +1,35 @@
+../../../bin/normalizer,sha256=n3V02Bpo72_E9CJPUAvKEZEoRfPC5jueWpmsSbsI0WM,252
+charset_normalizer-3.4.3.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4
+charset_normalizer-3.4.3.dist-info/METADATA,sha256=nBNOskPUtcqHtaSPPaJafjXrlicPcPIgLFzpJQTgvaA,36700
+charset_normalizer-3.4.3.dist-info/RECORD,,
+charset_normalizer-3.4.3.dist-info/WHEEL,sha256=DxRnWQz-Kp9-4a4hdDHsSv0KUC3H7sN9Nbef3-8RjXU,190
+charset_normalizer-3.4.3.dist-info/entry_points.txt,sha256=ADSTKrkXZ3hhdOVFi6DcUEHQRS0xfxDIE_pEz4wLIXA,65
+charset_normalizer-3.4.3.dist-info/licenses/LICENSE,sha256=bQ1Bv-FwrGx9wkjJpj4lTQ-0WmDVCoJX0K-SxuJJuIc,1071
+charset_normalizer-3.4.3.dist-info/top_level.txt,sha256=7ASyzePr8_xuZWJsnqJjIBtyV8vhEo0wBCv1MPRRi3Q,19
+charset_normalizer/__init__.py,sha256=OKRxRv2Zhnqk00tqkN0c1BtJjm165fWXLydE52IKuHc,1590
+charset_normalizer/__main__.py,sha256=yzYxMR-IhKRHYwcSlavEv8oGdwxsR89mr2X09qXGdps,109
+charset_normalizer/__pycache__/__init__.cpython-312.pyc,,
+charset_normalizer/__pycache__/__main__.cpython-312.pyc,,
+charset_normalizer/__pycache__/api.cpython-312.pyc,,
+charset_normalizer/__pycache__/cd.cpython-312.pyc,,
+charset_normalizer/__pycache__/constant.cpython-312.pyc,,
+charset_normalizer/__pycache__/legacy.cpython-312.pyc,,
+charset_normalizer/__pycache__/md.cpython-312.pyc,,
+charset_normalizer/__pycache__/models.cpython-312.pyc,,
+charset_normalizer/__pycache__/utils.cpython-312.pyc,,
+charset_normalizer/__pycache__/version.cpython-312.pyc,,
+charset_normalizer/api.py,sha256=V07i8aVeCD8T2fSia3C-fn0i9t8qQguEBhsqszg32Ns,22668
+charset_normalizer/cd.py,sha256=WKTo1HDb-H9HfCDc3Bfwq5jzS25Ziy9SE2a74SgTq88,12522
+charset_normalizer/cli/__init__.py,sha256=D8I86lFk2-py45JvqxniTirSj_sFyE6sjaY_0-G1shc,136
+charset_normalizer/cli/__main__.py,sha256=dMaXG6IJXRvqq8z2tig7Qb83-BpWTln55ooiku5_uvg,12646
+charset_normalizer/cli/__pycache__/__init__.cpython-312.pyc,,
+charset_normalizer/cli/__pycache__/__main__.cpython-312.pyc,,
+charset_normalizer/constant.py,sha256=7UVY4ldYhmQMHUdgQ_sgZmzcQ0xxYxpBunqSZ-XJZ8U,42713
+charset_normalizer/legacy.py,sha256=sYBzSpzsRrg_wF4LP536pG64BItw7Tqtc3SMQAHvFLM,2731
+charset_normalizer/md.cpython-312-x86_64-linux-gnu.so,sha256=sZ7umtJLjKfA83NFJ7npkiDyr06zDT8cWtl6uIx2MsM,15912
+charset_normalizer/md.py,sha256=-_oN3h3_X99nkFfqamD3yu45DC_wfk5odH0Tr_CQiXs,20145
+charset_normalizer/md__mypyc.cpython-312-x86_64-linux-gnu.so,sha256=froFxeWX3QD-u-6lU-1gyOZmEo6pgm3i-HdUWz2J8ro,289536
+charset_normalizer/models.py,sha256=lKXhOnIPtiakbK3i__J9wpOfzx3JDTKj7Dn3Rg0VaRI,12394
+charset_normalizer/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
+charset_normalizer/utils.py,sha256=sTejPgrdlNsKNucZfJCxJ95lMTLA0ShHLLE3n5wpT9Q,12170
+charset_normalizer/version.py,sha256=hBN3id1io4HMVPtyDn9IIRVShbBM0kgVs3haVtppZOE,115
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer-3.4.3.dist-info/WHEEL b/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer-3.4.3.dist-info/WHEEL
new file mode 100644
index 0000000..f3e8a97
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer-3.4.3.dist-info/WHEEL
@@ -0,0 +1,7 @@
+Wheel-Version: 1.0
+Generator: setuptools (80.9.0)
+Root-Is-Purelib: false
+Tag: cp312-cp312-manylinux_2_17_x86_64
+Tag: cp312-cp312-manylinux2014_x86_64
+Tag: cp312-cp312-manylinux_2_28_x86_64
+
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer-3.4.3.dist-info/entry_points.txt b/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer-3.4.3.dist-info/entry_points.txt
new file mode 100644
index 0000000..65619e7
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer-3.4.3.dist-info/entry_points.txt
@@ -0,0 +1,2 @@
+[console_scripts]
+normalizer = charset_normalizer.cli:cli_detect
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer-3.4.3.dist-info/licenses/LICENSE b/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer-3.4.3.dist-info/licenses/LICENSE
new file mode 100644
index 0000000..9725772
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer-3.4.3.dist-info/licenses/LICENSE
@@ -0,0 +1,21 @@
+MIT License
+
+Copyright (c) 2025 TAHRI Ahmed R.
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer-3.4.3.dist-info/top_level.txt b/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer-3.4.3.dist-info/top_level.txt
new file mode 100644
index 0000000..66958f0
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer-3.4.3.dist-info/top_level.txt
@@ -0,0 +1 @@
+charset_normalizer
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/__init__.py b/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/__init__.py
new file mode 100644
index 0000000..0d3a379
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/__init__.py
@@ -0,0 +1,48 @@
+"""
+Charset-Normalizer
+~~~~~~~~~~~~~~
+The Real First Universal Charset Detector.
+A library that helps you read text from an unknown charset encoding.
+Motivated by chardet, This package is trying to resolve the issue by taking a new approach.
+All IANA character set names for which the Python core library provides codecs are supported.
+
+Basic usage:
+ >>> from charset_normalizer import from_bytes
+ >>> results = from_bytes('Bсеки човек има право на образование. Oбразованието!'.encode('utf_8'))
+ >>> best_guess = results.best()
+ >>> str(best_guess)
+ 'Bсеки човек има право на образование. Oбразованието!'
+
+Others methods and usages are available - see the full documentation
+at <https://charset-normalizer.readthedocs.io/en/latest/>.
+:copyright: (c) 2021 by Ahmed TAHRI
+:license: MIT, see LICENSE for more details.
+"""
+
+from __future__ import annotations
+
+import logging
+
+from .api import from_bytes, from_fp, from_path, is_binary
+from .legacy import detect
+from .models import CharsetMatch, CharsetMatches
+from .utils import set_logging_handler
+from .version import VERSION, __version__
+
+__all__ = (
+ "from_fp",
+ "from_path",
+ "from_bytes",
+ "is_binary",
+ "detect",
+ "CharsetMatch",
+ "CharsetMatches",
+ "__version__",
+ "VERSION",
+ "set_logging_handler",
+)
+
+# Attach a NullHandler to the top level logger by default
+# https://docs.python.org/3.3/howto/logging.html#configuring-logging-for-a-library
+
+logging.getLogger("charset_normalizer").addHandler(logging.NullHandler())
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/__main__.py b/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/__main__.py
new file mode 100644
index 0000000..e0e76f7
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/__main__.py
@@ -0,0 +1,6 @@
+from __future__ import annotations
+
+from .cli import cli_detect
+
+if __name__ == "__main__":
+ cli_detect()
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/__pycache__/__init__.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/__pycache__/__init__.cpython-312.pyc
new file mode 100644
index 0000000..6d3729b
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/__pycache__/__init__.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/__pycache__/__main__.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/__pycache__/__main__.cpython-312.pyc
new file mode 100644
index 0000000..cf9c884
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/__pycache__/__main__.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/__pycache__/api.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/__pycache__/api.cpython-312.pyc
new file mode 100644
index 0000000..71c48ab
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/__pycache__/api.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/__pycache__/cd.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/__pycache__/cd.cpython-312.pyc
new file mode 100644
index 0000000..0a8974e
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/__pycache__/cd.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/__pycache__/constant.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/__pycache__/constant.cpython-312.pyc
new file mode 100644
index 0000000..b507e20
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/__pycache__/constant.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/__pycache__/legacy.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/__pycache__/legacy.cpython-312.pyc
new file mode 100644
index 0000000..99fa0c5
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/__pycache__/legacy.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/__pycache__/md.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/__pycache__/md.cpython-312.pyc
new file mode 100644
index 0000000..b217b7c
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/__pycache__/md.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/__pycache__/models.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/__pycache__/models.cpython-312.pyc
new file mode 100644
index 0000000..5bd532b
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/__pycache__/models.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/__pycache__/utils.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/__pycache__/utils.cpython-312.pyc
new file mode 100644
index 0000000..6303a7c
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/__pycache__/utils.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/__pycache__/version.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/__pycache__/version.cpython-312.pyc
new file mode 100644
index 0000000..932bf2d
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/__pycache__/version.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/api.py b/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/api.py
new file mode 100644
index 0000000..ebd9639
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/api.py
@@ -0,0 +1,669 @@
+from __future__ import annotations
+
+import logging
+from os import PathLike
+from typing import BinaryIO
+
+from .cd import (
+ coherence_ratio,
+ encoding_languages,
+ mb_encoding_languages,
+ merge_coherence_ratios,
+)
+from .constant import IANA_SUPPORTED, TOO_BIG_SEQUENCE, TOO_SMALL_SEQUENCE, TRACE
+from .md import mess_ratio
+from .models import CharsetMatch, CharsetMatches
+from .utils import (
+ any_specified_encoding,
+ cut_sequence_chunks,
+ iana_name,
+ identify_sig_or_bom,
+ is_cp_similar,
+ is_multi_byte_encoding,
+ should_strip_sig_or_bom,
+)
+
+logger = logging.getLogger("charset_normalizer")
+explain_handler = logging.StreamHandler()
+explain_handler.setFormatter(
+ logging.Formatter("%(asctime)s | %(levelname)s | %(message)s")
+)
+
+
+def from_bytes(
+ sequences: bytes | bytearray,
+ steps: int = 5,
+ chunk_size: int = 512,
+ threshold: float = 0.2,
+ cp_isolation: list[str] | None = None,
+ cp_exclusion: list[str] | None = None,
+ preemptive_behaviour: bool = True,
+ explain: bool = False,
+ language_threshold: float = 0.1,
+ enable_fallback: bool = True,
+) -> CharsetMatches:
+ """
+ Given a raw bytes sequence, return the best possibles charset usable to render str objects.
+ If there is no results, it is a strong indicator that the source is binary/not text.
+ By default, the process will extract 5 blocks of 512o each to assess the mess and coherence of a given sequence.
+ And will give up a particular code page after 20% of measured mess. Those criteria are customizable at will.
+
+ The preemptive behavior DOES NOT replace the traditional detection workflow, it prioritize a particular code page
+ but never take it for granted. Can improve the performance.
+
+ You may want to focus your attention to some code page or/and not others, use cp_isolation and cp_exclusion for that
+ purpose.
+
+ This function will strip the SIG in the payload/sequence every time except on UTF-16, UTF-32.
+ By default the library does not setup any handler other than the NullHandler, if you choose to set the 'explain'
+ toggle to True it will alter the logger configuration to add a StreamHandler that is suitable for debugging.
+ Custom logging format and handler can be set manually.
+ """
+
+ if not isinstance(sequences, (bytearray, bytes)):
+ raise TypeError(
+ "Expected object of type bytes or bytearray, got: {}".format(
+ type(sequences)
+ )
+ )
+
+ if explain:
+ previous_logger_level: int = logger.level
+ logger.addHandler(explain_handler)
+ logger.setLevel(TRACE)
+
+ length: int = len(sequences)
+
+ if length == 0:
+ logger.debug("Encoding detection on empty bytes, assuming utf_8 intention.")
+ if explain: # Defensive: ensure exit path clean handler
+ logger.removeHandler(explain_handler)
+ logger.setLevel(previous_logger_level or logging.WARNING)
+ return CharsetMatches([CharsetMatch(sequences, "utf_8", 0.0, False, [], "")])
+
+ if cp_isolation is not None:
+ logger.log(
+ TRACE,
+ "cp_isolation is set. use this flag for debugging purpose. "
+ "limited list of encoding allowed : %s.",
+ ", ".join(cp_isolation),
+ )
+ cp_isolation = [iana_name(cp, False) for cp in cp_isolation]
+ else:
+ cp_isolation = []
+
+ if cp_exclusion is not None:
+ logger.log(
+ TRACE,
+ "cp_exclusion is set. use this flag for debugging purpose. "
+ "limited list of encoding excluded : %s.",
+ ", ".join(cp_exclusion),
+ )
+ cp_exclusion = [iana_name(cp, False) for cp in cp_exclusion]
+ else:
+ cp_exclusion = []
+
+ if length <= (chunk_size * steps):
+ logger.log(
+ TRACE,
+ "override steps (%i) and chunk_size (%i) as content does not fit (%i byte(s) given) parameters.",
+ steps,
+ chunk_size,
+ length,
+ )
+ steps = 1
+ chunk_size = length
+
+ if steps > 1 and length / steps < chunk_size:
+ chunk_size = int(length / steps)
+
+ is_too_small_sequence: bool = len(sequences) < TOO_SMALL_SEQUENCE
+ is_too_large_sequence: bool = len(sequences) >= TOO_BIG_SEQUENCE
+
+ if is_too_small_sequence:
+ logger.log(
+ TRACE,
+ "Trying to detect encoding from a tiny portion of ({}) byte(s).".format(
+ length
+ ),
+ )
+ elif is_too_large_sequence:
+ logger.log(
+ TRACE,
+ "Using lazy str decoding because the payload is quite large, ({}) byte(s).".format(
+ length
+ ),
+ )
+
+ prioritized_encodings: list[str] = []
+
+ specified_encoding: str | None = (
+ any_specified_encoding(sequences) if preemptive_behaviour else None
+ )
+
+ if specified_encoding is not None:
+ prioritized_encodings.append(specified_encoding)
+ logger.log(
+ TRACE,
+ "Detected declarative mark in sequence. Priority +1 given for %s.",
+ specified_encoding,
+ )
+
+ tested: set[str] = set()
+ tested_but_hard_failure: list[str] = []
+ tested_but_soft_failure: list[str] = []
+
+ fallback_ascii: CharsetMatch | None = None
+ fallback_u8: CharsetMatch | None = None
+ fallback_specified: CharsetMatch | None = None
+
+ results: CharsetMatches = CharsetMatches()
+
+ early_stop_results: CharsetMatches = CharsetMatches()
+
+ sig_encoding, sig_payload = identify_sig_or_bom(sequences)
+
+ if sig_encoding is not None:
+ prioritized_encodings.append(sig_encoding)
+ logger.log(
+ TRACE,
+ "Detected a SIG or BOM mark on first %i byte(s). Priority +1 given for %s.",
+ len(sig_payload),
+ sig_encoding,
+ )
+
+ prioritized_encodings.append("ascii")
+
+ if "utf_8" not in prioritized_encodings:
+ prioritized_encodings.append("utf_8")
+
+ for encoding_iana in prioritized_encodings + IANA_SUPPORTED:
+ if cp_isolation and encoding_iana not in cp_isolation:
+ continue
+
+ if cp_exclusion and encoding_iana in cp_exclusion:
+ continue
+
+ if encoding_iana in tested:
+ continue
+
+ tested.add(encoding_iana)
+
+ decoded_payload: str | None = None
+ bom_or_sig_available: bool = sig_encoding == encoding_iana
+ strip_sig_or_bom: bool = bom_or_sig_available and should_strip_sig_or_bom(
+ encoding_iana
+ )
+
+ if encoding_iana in {"utf_16", "utf_32"} and not bom_or_sig_available:
+ logger.log(
+ TRACE,
+ "Encoding %s won't be tested as-is because it require a BOM. Will try some sub-encoder LE/BE.",
+ encoding_iana,
+ )
+ continue
+ if encoding_iana in {"utf_7"} and not bom_or_sig_available:
+ logger.log(
+ TRACE,
+ "Encoding %s won't be tested as-is because detection is unreliable without BOM/SIG.",
+ encoding_iana,
+ )
+ continue
+
+ try:
+ is_multi_byte_decoder: bool = is_multi_byte_encoding(encoding_iana)
+ except (ModuleNotFoundError, ImportError):
+ logger.log(
+ TRACE,
+ "Encoding %s does not provide an IncrementalDecoder",
+ encoding_iana,
+ )
+ continue
+
+ try:
+ if is_too_large_sequence and is_multi_byte_decoder is False:
+ str(
+ (
+ sequences[: int(50e4)]
+ if strip_sig_or_bom is False
+ else sequences[len(sig_payload) : int(50e4)]
+ ),
+ encoding=encoding_iana,
+ )
+ else:
+ decoded_payload = str(
+ (
+ sequences
+ if strip_sig_or_bom is False
+ else sequences[len(sig_payload) :]
+ ),
+ encoding=encoding_iana,
+ )
+ except (UnicodeDecodeError, LookupError) as e:
+ if not isinstance(e, LookupError):
+ logger.log(
+ TRACE,
+ "Code page %s does not fit given bytes sequence at ALL. %s",
+ encoding_iana,
+ str(e),
+ )
+ tested_but_hard_failure.append(encoding_iana)
+ continue
+
+ similar_soft_failure_test: bool = False
+
+ for encoding_soft_failed in tested_but_soft_failure:
+ if is_cp_similar(encoding_iana, encoding_soft_failed):
+ similar_soft_failure_test = True
+ break
+
+ if similar_soft_failure_test:
+ logger.log(
+ TRACE,
+ "%s is deemed too similar to code page %s and was consider unsuited already. Continuing!",
+ encoding_iana,
+ encoding_soft_failed,
+ )
+ continue
+
+ r_ = range(
+ 0 if not bom_or_sig_available else len(sig_payload),
+ length,
+ int(length / steps),
+ )
+
+ multi_byte_bonus: bool = (
+ is_multi_byte_decoder
+ and decoded_payload is not None
+ and len(decoded_payload) < length
+ )
+
+ if multi_byte_bonus:
+ logger.log(
+ TRACE,
+ "Code page %s is a multi byte encoding table and it appear that at least one character "
+ "was encoded using n-bytes.",
+ encoding_iana,
+ )
+
+ max_chunk_gave_up: int = int(len(r_) / 4)
+
+ max_chunk_gave_up = max(max_chunk_gave_up, 2)
+ early_stop_count: int = 0
+ lazy_str_hard_failure = False
+
+ md_chunks: list[str] = []
+ md_ratios = []
+
+ try:
+ for chunk in cut_sequence_chunks(
+ sequences,
+ encoding_iana,
+ r_,
+ chunk_size,
+ bom_or_sig_available,
+ strip_sig_or_bom,
+ sig_payload,
+ is_multi_byte_decoder,
+ decoded_payload,
+ ):
+ md_chunks.append(chunk)
+
+ md_ratios.append(
+ mess_ratio(
+ chunk,
+ threshold,
+ explain is True and 1 <= len(cp_isolation) <= 2,
+ )
+ )
+
+ if md_ratios[-1] >= threshold:
+ early_stop_count += 1
+
+ if (early_stop_count >= max_chunk_gave_up) or (
+ bom_or_sig_available and strip_sig_or_bom is False
+ ):
+ break
+ except (
+ UnicodeDecodeError
+ ) as e: # Lazy str loading may have missed something there
+ logger.log(
+ TRACE,
+ "LazyStr Loading: After MD chunk decode, code page %s does not fit given bytes sequence at ALL. %s",
+ encoding_iana,
+ str(e),
+ )
+ early_stop_count = max_chunk_gave_up
+ lazy_str_hard_failure = True
+
+ # We might want to check the sequence again with the whole content
+ # Only if initial MD tests passes
+ if (
+ not lazy_str_hard_failure
+ and is_too_large_sequence
+ and not is_multi_byte_decoder
+ ):
+ try:
+ sequences[int(50e3) :].decode(encoding_iana, errors="strict")
+ except UnicodeDecodeError as e:
+ logger.log(
+ TRACE,
+ "LazyStr Loading: After final lookup, code page %s does not fit given bytes sequence at ALL. %s",
+ encoding_iana,
+ str(e),
+ )
+ tested_but_hard_failure.append(encoding_iana)
+ continue
+
+ mean_mess_ratio: float = sum(md_ratios) / len(md_ratios) if md_ratios else 0.0
+ if mean_mess_ratio >= threshold or early_stop_count >= max_chunk_gave_up:
+ tested_but_soft_failure.append(encoding_iana)
+ logger.log(
+ TRACE,
+ "%s was excluded because of initial chaos probing. Gave up %i time(s). "
+ "Computed mean chaos is %f %%.",
+ encoding_iana,
+ early_stop_count,
+ round(mean_mess_ratio * 100, ndigits=3),
+ )
+ # Preparing those fallbacks in case we got nothing.
+ if (
+ enable_fallback
+ and encoding_iana
+ in ["ascii", "utf_8", specified_encoding, "utf_16", "utf_32"]
+ and not lazy_str_hard_failure
+ ):
+ fallback_entry = CharsetMatch(
+ sequences,
+ encoding_iana,
+ threshold,
+ bom_or_sig_available,
+ [],
+ decoded_payload,
+ preemptive_declaration=specified_encoding,
+ )
+ if encoding_iana == specified_encoding:
+ fallback_specified = fallback_entry
+ elif encoding_iana == "ascii":
+ fallback_ascii = fallback_entry
+ else:
+ fallback_u8 = fallback_entry
+ continue
+
+ logger.log(
+ TRACE,
+ "%s passed initial chaos probing. Mean measured chaos is %f %%",
+ encoding_iana,
+ round(mean_mess_ratio * 100, ndigits=3),
+ )
+
+ if not is_multi_byte_decoder:
+ target_languages: list[str] = encoding_languages(encoding_iana)
+ else:
+ target_languages = mb_encoding_languages(encoding_iana)
+
+ if target_languages:
+ logger.log(
+ TRACE,
+ "{} should target any language(s) of {}".format(
+ encoding_iana, str(target_languages)
+ ),
+ )
+
+ cd_ratios = []
+
+ # We shall skip the CD when its about ASCII
+ # Most of the time its not relevant to run "language-detection" on it.
+ if encoding_iana != "ascii":
+ for chunk in md_chunks:
+ chunk_languages = coherence_ratio(
+ chunk,
+ language_threshold,
+ ",".join(target_languages) if target_languages else None,
+ )
+
+ cd_ratios.append(chunk_languages)
+
+ cd_ratios_merged = merge_coherence_ratios(cd_ratios)
+
+ if cd_ratios_merged:
+ logger.log(
+ TRACE,
+ "We detected language {} using {}".format(
+ cd_ratios_merged, encoding_iana
+ ),
+ )
+
+ current_match = CharsetMatch(
+ sequences,
+ encoding_iana,
+ mean_mess_ratio,
+ bom_or_sig_available,
+ cd_ratios_merged,
+ (
+ decoded_payload
+ if (
+ is_too_large_sequence is False
+ or encoding_iana in [specified_encoding, "ascii", "utf_8"]
+ )
+ else None
+ ),
+ preemptive_declaration=specified_encoding,
+ )
+
+ results.append(current_match)
+
+ if (
+ encoding_iana in [specified_encoding, "ascii", "utf_8"]
+ and mean_mess_ratio < 0.1
+ ):
+ # If md says nothing to worry about, then... stop immediately!
+ if mean_mess_ratio == 0.0:
+ logger.debug(
+ "Encoding detection: %s is most likely the one.",
+ current_match.encoding,
+ )
+                if explain:  # Defensive: ensure the explain handler is removed on this exit path
+ logger.removeHandler(explain_handler)
+ logger.setLevel(previous_logger_level)
+ return CharsetMatches([current_match])
+
+ early_stop_results.append(current_match)
+
+ if (
+ len(early_stop_results)
+ and (specified_encoding is None or specified_encoding in tested)
+ and "ascii" in tested
+ and "utf_8" in tested
+ ):
+ probable_result: CharsetMatch = early_stop_results.best() # type: ignore[assignment]
+ logger.debug(
+ "Encoding detection: %s is most likely the one.",
+ probable_result.encoding,
+ )
+            if explain:  # Defensive: ensure the explain handler is removed on this exit path
+ logger.removeHandler(explain_handler)
+ logger.setLevel(previous_logger_level)
+
+ return CharsetMatches([probable_result])
+
+ if encoding_iana == sig_encoding:
+ logger.debug(
+ "Encoding detection: %s is most likely the one as we detected a BOM or SIG within "
+ "the beginning of the sequence.",
+ encoding_iana,
+ )
+            if explain:  # Defensive: ensure the explain handler is removed on this exit path
+ logger.removeHandler(explain_handler)
+ logger.setLevel(previous_logger_level)
+ return CharsetMatches([results[encoding_iana]])
+
+ if len(results) == 0:
+ if fallback_u8 or fallback_ascii or fallback_specified:
+ logger.log(
+ TRACE,
+ "Nothing got out of the detection process. Using ASCII/UTF-8/Specified fallback.",
+ )
+
+ if fallback_specified:
+ logger.debug(
+ "Encoding detection: %s will be used as a fallback match",
+ fallback_specified.encoding,
+ )
+ results.append(fallback_specified)
+ elif (
+ (fallback_u8 and fallback_ascii is None)
+ or (
+ fallback_u8
+ and fallback_ascii
+ and fallback_u8.fingerprint != fallback_ascii.fingerprint
+ )
+ or (fallback_u8 is not None)
+ ):
+ logger.debug("Encoding detection: utf_8 will be used as a fallback match")
+ results.append(fallback_u8)
+ elif fallback_ascii:
+ logger.debug("Encoding detection: ascii will be used as a fallback match")
+ results.append(fallback_ascii)
+
+ if results:
+ logger.debug(
+ "Encoding detection: Found %s as plausible (best-candidate) for content. With %i alternatives.",
+ results.best().encoding, # type: ignore
+ len(results) - 1,
+ )
+ else:
+ logger.debug("Encoding detection: Unable to determine any suitable charset.")
+
+ if explain:
+ logger.removeHandler(explain_handler)
+ logger.setLevel(previous_logger_level)
+
+ return results
+
+
+def from_fp(
+ fp: BinaryIO,
+ steps: int = 5,
+ chunk_size: int = 512,
+ threshold: float = 0.20,
+ cp_isolation: list[str] | None = None,
+ cp_exclusion: list[str] | None = None,
+ preemptive_behaviour: bool = True,
+ explain: bool = False,
+ language_threshold: float = 0.1,
+ enable_fallback: bool = True,
+) -> CharsetMatches:
+ """
+    Same as the function from_bytes but using a file pointer that is already ready.
+ Will not close the file pointer.
+ """
+ return from_bytes(
+ fp.read(),
+ steps,
+ chunk_size,
+ threshold,
+ cp_isolation,
+ cp_exclusion,
+ preemptive_behaviour,
+ explain,
+ language_threshold,
+ enable_fallback,
+ )
+
+
+def from_path(
+ path: str | bytes | PathLike, # type: ignore[type-arg]
+ steps: int = 5,
+ chunk_size: int = 512,
+ threshold: float = 0.20,
+ cp_isolation: list[str] | None = None,
+ cp_exclusion: list[str] | None = None,
+ preemptive_behaviour: bool = True,
+ explain: bool = False,
+ language_threshold: float = 0.1,
+ enable_fallback: bool = True,
+) -> CharsetMatches:
+ """
+    Same as the function from_bytes but with one extra step: opening and reading the given file path in binary mode.
+ Can raise IOError.
+ """
+ with open(path, "rb") as fp:
+ return from_fp(
+ fp,
+ steps,
+ chunk_size,
+ threshold,
+ cp_isolation,
+ cp_exclusion,
+ preemptive_behaviour,
+ explain,
+ language_threshold,
+ enable_fallback,
+ )
+
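+# A minimal usage sketch (illustrative comments only, not part of the library);
+# "my_file.txt" is a hypothetical path.
+#
+#   from charset_normalizer import from_path
+#
+#   best_guess = from_path("my_file.txt").best()
+#   if best_guess is not None:
+#       print(best_guess.encoding)  # e.g. "utf_8"
+#       print(str(best_guess))      # the decoded, normalized text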
+
+def is_binary(
+ fp_or_path_or_payload: PathLike | str | BinaryIO | bytes, # type: ignore[type-arg]
+ steps: int = 5,
+ chunk_size: int = 512,
+ threshold: float = 0.20,
+ cp_isolation: list[str] | None = None,
+ cp_exclusion: list[str] | None = None,
+ preemptive_behaviour: bool = True,
+ explain: bool = False,
+ language_threshold: float = 0.1,
+ enable_fallback: bool = False,
+) -> bool:
+ """
+    Detect if the given input (file, bytes, or path) points to binary content, i.e. not a string.
+    Based on the same main heuristic algorithms and default kwargs, with the sole exception that fallback matches
+    are disabled, to be stricter about content that is ASCII-compatible but unlikely to be a string.
+ """
+ if isinstance(fp_or_path_or_payload, (str, PathLike)):
+ guesses = from_path(
+ fp_or_path_or_payload,
+ steps=steps,
+ chunk_size=chunk_size,
+ threshold=threshold,
+ cp_isolation=cp_isolation,
+ cp_exclusion=cp_exclusion,
+ preemptive_behaviour=preemptive_behaviour,
+ explain=explain,
+ language_threshold=language_threshold,
+ enable_fallback=enable_fallback,
+ )
+ elif isinstance(
+ fp_or_path_or_payload,
+ (
+ bytes,
+ bytearray,
+ ),
+ ):
+ guesses = from_bytes(
+ fp_or_path_or_payload,
+ steps=steps,
+ chunk_size=chunk_size,
+ threshold=threshold,
+ cp_isolation=cp_isolation,
+ cp_exclusion=cp_exclusion,
+ preemptive_behaviour=preemptive_behaviour,
+ explain=explain,
+ language_threshold=language_threshold,
+ enable_fallback=enable_fallback,
+ )
+ else:
+ guesses = from_fp(
+ fp_or_path_or_payload,
+ steps=steps,
+ chunk_size=chunk_size,
+ threshold=threshold,
+ cp_isolation=cp_isolation,
+ cp_exclusion=cp_exclusion,
+ preemptive_behaviour=preemptive_behaviour,
+ explain=explain,
+ language_threshold=language_threshold,
+ enable_fallback=enable_fallback,
+ )
+
+ return not guesses
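+
+
+# Illustrative sketch (comments only): is_binary accepts a path, raw bytes, or
+# an open binary file object. "archive.zip" is a hypothetical path.
+#
+#   from charset_normalizer import is_binary
+#
+#   is_binary(b"\x00\x5f\x2b\x1f")  # likely True: not decodable as text
+#   is_binary("archive.zip")        # likely True for a compressed payload
+#   is_binary(b"hello world")       # False: plain ASCII text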
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/cd.py b/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/cd.py
new file mode 100644
index 0000000..71a3ed5
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/cd.py
@@ -0,0 +1,395 @@
+from __future__ import annotations
+
+import importlib
+from codecs import IncrementalDecoder
+from collections import Counter
+from functools import lru_cache
+from typing import Counter as TypeCounter
+
+from .constant import (
+ FREQUENCIES,
+ KO_NAMES,
+ LANGUAGE_SUPPORTED_COUNT,
+ TOO_SMALL_SEQUENCE,
+ ZH_NAMES,
+)
+from .md import is_suspiciously_successive_range
+from .models import CoherenceMatches
+from .utils import (
+ is_accentuated,
+ is_latin,
+ is_multi_byte_encoding,
+ is_unicode_range_secondary,
+ unicode_range,
+)
+
+
+def encoding_unicode_range(iana_name: str) -> list[str]:
+ """
+    Return the Unicode ranges associated with a single-byte code page.
+ """
+ if is_multi_byte_encoding(iana_name):
+ raise OSError("Function not supported on multi-byte code page")
+
+ decoder = importlib.import_module(f"encodings.{iana_name}").IncrementalDecoder
+
+ p: IncrementalDecoder = decoder(errors="ignore")
+ seen_ranges: dict[str, int] = {}
+ character_count: int = 0
+
+ for i in range(0x40, 0xFF):
+ chunk: str = p.decode(bytes([i]))
+
+ if chunk:
+ character_range: str | None = unicode_range(chunk)
+
+ if character_range is None:
+ continue
+
+ if is_unicode_range_secondary(character_range) is False:
+ if character_range not in seen_ranges:
+ seen_ranges[character_range] = 0
+ seen_ranges[character_range] += 1
+ character_count += 1
+
+ return sorted(
+ [
+ character_range
+ for character_range in seen_ranges
+ if seen_ranges[character_range] / character_count >= 0.15
+ ]
+ )
+
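+# Rough illustration (expected output, not verified here): for a Cyrillic code
+# page such as "cp1251", the 0x40-0xFF probe decodes mostly to Basic Latin and
+# Cyrillic characters, so the call should return something like:
+#
+#   encoding_unicode_range("cp1251")  # -> ["Basic Latin", "Cyrillic"]
+#
+# Multi-byte code pages raise OSError, as guarded above.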
+
+def unicode_range_languages(primary_range: str) -> list[str]:
+ """
+    Return the languages inferred to use a given Unicode range.
+ """
+ languages: list[str] = []
+
+ for language, characters in FREQUENCIES.items():
+ for character in characters:
+ if unicode_range(character) == primary_range:
+ languages.append(language)
+ break
+
+ return languages
+
+
+@lru_cache()
+def encoding_languages(iana_name: str) -> list[str]:
+ """
+    Single-byte encoding language association. Some code pages are heavily linked to particular language(s).
+    This function establishes that correspondence.
+ """
+ unicode_ranges: list[str] = encoding_unicode_range(iana_name)
+ primary_range: str | None = None
+
+ for specified_range in unicode_ranges:
+ if "Latin" not in specified_range:
+ primary_range = specified_range
+ break
+
+ if primary_range is None:
+ return ["Latin Based"]
+
+ return unicode_range_languages(primary_range)
+
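+# Following the sketch above, a Cyrillic single-byte code page maps to the
+# Cyrillic-using languages of FREQUENCIES (expected behaviour, not verified here):
+#
+#   encoding_languages("cp1251")  # -> ["Russian", "Ukrainian", ...]
+#   encoding_languages("cp1252")  # -> ["Latin Based"] (no non-Latin primary range)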
+
+@lru_cache()
+def mb_encoding_languages(iana_name: str) -> list[str]:
+ """
+    Multi-byte encoding language association. Some code pages are heavily linked to particular language(s).
+    This function establishes that correspondence.
+ """
+ if (
+ iana_name.startswith("shift_")
+ or iana_name.startswith("iso2022_jp")
+ or iana_name.startswith("euc_j")
+ or iana_name == "cp932"
+ ):
+ return ["Japanese"]
+ if iana_name.startswith("gb") or iana_name in ZH_NAMES:
+ return ["Chinese"]
+ if iana_name.startswith("iso2022_kr") or iana_name in KO_NAMES:
+ return ["Korean"]
+
+ return []
+
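+# Quick sanity examples; the values follow directly from the branches above:
+#
+#   mb_encoding_languages("shift_jis")  # -> ["Japanese"]
+#   mb_encoding_languages("gb18030")    # -> ["Chinese"]
+#   mb_encoding_languages("euc_kr")     # -> ["Korean"]   (member of KO_NAMES)
+#   mb_encoding_languages("utf_8")      # -> []           (no specific association)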
+
+@lru_cache(maxsize=LANGUAGE_SUPPORTED_COUNT)
+def get_target_features(language: str) -> tuple[bool, bool]:
+ """
+    Determine the main aspects of a supported language: whether it contains accents and whether it is purely Latin-based.
+ """
+ target_have_accents: bool = False
+ target_pure_latin: bool = True
+
+ for character in FREQUENCIES[language]:
+ if not target_have_accents and is_accentuated(character):
+ target_have_accents = True
+ if target_pure_latin and is_latin(character) is False:
+ target_pure_latin = False
+
+ return target_have_accents, target_pure_latin
+
+
+def alphabet_languages(
+ characters: list[str], ignore_non_latin: bool = False
+) -> list[str]:
+ """
+    Return the languages associated with the given characters.
+ """
+ languages: list[tuple[str, float]] = []
+
+ source_have_accents = any(is_accentuated(character) for character in characters)
+
+ for language, language_characters in FREQUENCIES.items():
+ target_have_accents, target_pure_latin = get_target_features(language)
+
+ if ignore_non_latin and target_pure_latin is False:
+ continue
+
+ if target_have_accents is False and source_have_accents:
+ continue
+
+ character_count: int = len(language_characters)
+
+ character_match_count: int = len(
+ [c for c in language_characters if c in characters]
+ )
+
+ ratio: float = character_match_count / character_count
+
+ if ratio >= 0.2:
+ languages.append((language, ratio))
+
+ languages = sorted(languages, key=lambda x: x[1], reverse=True)
+
+ return [compatible_language[0] for compatible_language in languages]
+
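+# Illustrative call (expected behaviour, not verified here): because the input
+# contains an accented character, accent-less languages such as English are
+# filtered out and only accented Latin-based candidates remain, best ratio first.
+#
+#   alphabet_languages(list("étaisnrulo"))  # -> e.g. ["French", ...]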
+
+def characters_popularity_compare(
+ language: str, ordered_characters: list[str]
+) -> float:
+ """
+    Determine if an ordered character list (by occurrence, from most frequent to rarest) matches a particular language.
+    The result is a ratio between 0. (absolutely no correspondence) and 1. (near perfect fit).
+    Beware that this function is not strict on the match in order to ease the detection. (Meaning a close match counts as 1.)
+ """
+ if language not in FREQUENCIES:
+ raise ValueError(f"{language} not available")
+
+ character_approved_count: int = 0
+ FREQUENCIES_language_set = set(FREQUENCIES[language])
+
+ ordered_characters_count: int = len(ordered_characters)
+ target_language_characters_count: int = len(FREQUENCIES[language])
+
+ large_alphabet: bool = target_language_characters_count > 26
+
+ for character, character_rank in zip(
+ ordered_characters, range(0, ordered_characters_count)
+ ):
+ if character not in FREQUENCIES_language_set:
+ continue
+
+ character_rank_in_language: int = FREQUENCIES[language].index(character)
+ expected_projection_ratio: float = (
+ target_language_characters_count / ordered_characters_count
+ )
+ character_rank_projection: int = int(character_rank * expected_projection_ratio)
+
+ if (
+ large_alphabet is False
+ and abs(character_rank_projection - character_rank_in_language) > 4
+ ):
+ continue
+
+ if (
+ large_alphabet is True
+ and abs(character_rank_projection - character_rank_in_language)
+ < target_language_characters_count / 3
+ ):
+ character_approved_count += 1
+ continue
+
+ characters_before_source: list[str] = FREQUENCIES[language][
+ 0:character_rank_in_language
+ ]
+ characters_after_source: list[str] = FREQUENCIES[language][
+ character_rank_in_language:
+ ]
+ characters_before: list[str] = ordered_characters[0:character_rank]
+ characters_after: list[str] = ordered_characters[character_rank:]
+
+ before_match_count: int = len(
+ set(characters_before) & set(characters_before_source)
+ )
+
+ after_match_count: int = len(
+ set(characters_after) & set(characters_after_source)
+ )
+
+ if len(characters_before_source) == 0 and before_match_count <= 4:
+ character_approved_count += 1
+ continue
+
+ if len(characters_after_source) == 0 and after_match_count <= 4:
+ character_approved_count += 1
+ continue
+
+ if (
+ before_match_count / len(characters_before_source) >= 0.4
+ or after_match_count / len(characters_after_source) >= 0.4
+ ):
+ character_approved_count += 1
+ continue
+
+ return character_approved_count / len(ordered_characters)
+
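+# Hand-checked illustration: comparing the English frequency order against
+# itself yields a perfect score, while an order made of the rarest English
+# letters placed first scores far lower.
+#
+#   characters_popularity_compare("English", FREQUENCIES["English"])  # -> 1.0
+#   characters_popularity_compare("English", list("zqxjkvbpygf"))     # -> close to 0.0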
+
+def alpha_unicode_split(decoded_sequence: str) -> list[str]:
+ """
+    Given a decoded text sequence, return a list of str split by Unicode range / alphabet.
+    E.g. a text containing English/Latin with a bit of Hebrew will return two items in the resulting list;
+    one containing the Latin letters and the other the Hebrew ones.
+ """
+ layers: dict[str, str] = {}
+
+ for character in decoded_sequence:
+ if character.isalpha() is False:
+ continue
+
+ character_range: str | None = unicode_range(character)
+
+ if character_range is None:
+ continue
+
+ layer_target_range: str | None = None
+
+ for discovered_range in layers:
+ if (
+ is_suspiciously_successive_range(discovered_range, character_range)
+ is False
+ ):
+ layer_target_range = discovered_range
+ break
+
+ if layer_target_range is None:
+ layer_target_range = character_range
+
+ if layer_target_range not in layers:
+ layers[layer_target_range] = character.lower()
+ continue
+
+ layers[layer_target_range] += character.lower()
+
+ return list(layers.values())
+
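+# Illustrative example: mixing Latin and Cyrillic yields one layer per alphabet
+# (lower-cased, non-alphabetic characters dropped):
+#
+#   alpha_unicode_split("Hello Привет 123!")  # -> ["hello", "привет"]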
+
+def merge_coherence_ratios(results: list[CoherenceMatches]) -> CoherenceMatches:
+ """
+    This function merges results previously given by the function coherence_ratio.
+    The return type is the same as that of coherence_ratio.
+ """
+ per_language_ratios: dict[str, list[float]] = {}
+ for result in results:
+ for sub_result in result:
+ language, ratio = sub_result
+ if language not in per_language_ratios:
+ per_language_ratios[language] = [ratio]
+ continue
+ per_language_ratios[language].append(ratio)
+
+ merge = [
+ (
+ language,
+ round(
+ sum(per_language_ratios[language]) / len(per_language_ratios[language]),
+ 4,
+ ),
+ )
+ for language in per_language_ratios
+ ]
+
+ return sorted(merge, key=lambda x: x[1], reverse=True)
+
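+# Hand-computed example: ratios for the same language are averaged across
+# chunks, then the merged list is sorted best-first.
+#
+#   merge_coherence_ratios([[("English", 0.8), ("French", 0.4)], [("English", 0.6)]])
+#   # -> [("English", 0.7), ("French", 0.4)]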
+
+def filter_alt_coherence_matches(results: CoherenceMatches) -> CoherenceMatches:
+ """
+    We shall NOT return "English—" in CoherenceMatches because it is an alternative
+    of "English". This function keeps only the best match and removes the em dash from the label.
+ """
+ index_results: dict[str, list[float]] = dict()
+
+ for result in results:
+ language, ratio = result
+ no_em_name: str = language.replace("—", "")
+
+ if no_em_name not in index_results:
+ index_results[no_em_name] = []
+
+ index_results[no_em_name].append(ratio)
+
+ if any(len(index_results[e]) > 1 for e in index_results):
+ filtered_results: CoherenceMatches = []
+
+ for language in index_results:
+ filtered_results.append((language, max(index_results[language])))
+
+ return filtered_results
+
+ return results
+
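+# Hand-computed example: "English—" (an alternative frequency table) collapses
+# into "English", keeping the best ratio of the two.
+#
+#   filter_alt_coherence_matches([("English", 0.5), ("English—", 0.7)])
+#   # -> [("English", 0.7)]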
+
+@lru_cache(maxsize=2048)
+def coherence_ratio(
+ decoded_sequence: str, threshold: float = 0.1, lg_inclusion: str | None = None
+) -> CoherenceMatches:
+ """
+    Detect ANY language that can be identified in the given sequence. The sequence will be analysed by layers.
+    A layer = character extraction by alphabets/ranges.
+ """
+
+ results: list[tuple[str, float]] = []
+ ignore_non_latin: bool = False
+
+ sufficient_match_count: int = 0
+
+ lg_inclusion_list = lg_inclusion.split(",") if lg_inclusion is not None else []
+ if "Latin Based" in lg_inclusion_list:
+ ignore_non_latin = True
+ lg_inclusion_list.remove("Latin Based")
+
+ for layer in alpha_unicode_split(decoded_sequence):
+ sequence_frequencies: TypeCounter[str] = Counter(layer)
+ most_common = sequence_frequencies.most_common()
+
+ character_count: int = sum(o for c, o in most_common)
+
+ if character_count <= TOO_SMALL_SEQUENCE:
+ continue
+
+ popular_character_ordered: list[str] = [c for c, o in most_common]
+
+ for language in lg_inclusion_list or alphabet_languages(
+ popular_character_ordered, ignore_non_latin
+ ):
+ ratio: float = characters_popularity_compare(
+ language, popular_character_ordered
+ )
+
+ if ratio < threshold:
+ continue
+ elif ratio >= 0.8:
+ sufficient_match_count += 1
+
+ results.append((language, round(ratio, 4)))
+
+ if sufficient_match_count >= 3:
+ break
+
+ return sorted(
+ filter_alt_coherence_matches(results), key=lambda x: x[1], reverse=True
+ )
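+
+
+# Illustrative call (output hedged, it depends on the FREQUENCIES tables); a
+# layer needs more than TOO_SMALL_SEQUENCE alphabetic characters to be scored.
+#
+#   coherence_ratio("Привет, мир! Это небольшой пример текста на русском языке.")
+#   # -> e.g. [("Russian", 0.xx), ("Ukrainian", 0.xx), ...] best ratio first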
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/cli/__init__.py b/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/cli/__init__.py
new file mode 100644
index 0000000..543a5a4
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/cli/__init__.py
@@ -0,0 +1,8 @@
+from __future__ import annotations
+
+from .__main__ import cli_detect, query_yes_no
+
+__all__ = (
+ "cli_detect",
+ "query_yes_no",
+)
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/cli/__main__.py b/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/cli/__main__.py
new file mode 100644
index 0000000..cb64156
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/cli/__main__.py
@@ -0,0 +1,381 @@
+from __future__ import annotations
+
+import argparse
+import sys
+import typing
+from json import dumps
+from os.path import abspath, basename, dirname, join, realpath
+from platform import python_version
+from unicodedata import unidata_version
+
+import charset_normalizer.md as md_module
+from charset_normalizer import from_fp
+from charset_normalizer.models import CliDetectionResult
+from charset_normalizer.version import __version__
+
+
+def query_yes_no(question: str, default: str = "yes") -> bool:
+ """Ask a yes/no question via input() and return their answer.
+
+ "question" is a string that is presented to the user.
+    "default" is the presumed answer if the user just hits Enter.
+ It must be "yes" (the default), "no" or None (meaning
+ an answer is required of the user).
+
+ The "answer" return value is True for "yes" or False for "no".
+
+ Credit goes to (c) https://stackoverflow.com/questions/3041986/apt-command-line-interface-like-yes-no-input
+ """
+ valid = {"yes": True, "y": True, "ye": True, "no": False, "n": False}
+ if default is None:
+ prompt = " [y/n] "
+ elif default == "yes":
+ prompt = " [Y/n] "
+ elif default == "no":
+ prompt = " [y/N] "
+ else:
+ raise ValueError("invalid default answer: '%s'" % default)
+
+ while True:
+ sys.stdout.write(question + prompt)
+ choice = input().lower()
+ if default is not None and choice == "":
+ return valid[default]
+ elif choice in valid:
+ return valid[choice]
+ else:
+ sys.stdout.write("Please respond with 'yes' or 'no' (or 'y' or 'n').\n")
+
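+# Illustrative interaction (comments only; the prompt is read via input()):
+#
+#   query_yes_no('Overwrite "data.txt"?', "no")  # hypothetical question
+#   # prints:  Overwrite "data.txt"? [y/N]
+#   # user types "y"      -> returns True
+#   # user presses Enter  -> returns False (the "no" default)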
+
+class FileType:
+ """Factory for creating file object types
+
+ Instances of FileType are typically passed as type= arguments to the
+ ArgumentParser add_argument() method.
+
+ Keyword Arguments:
+ - mode -- A string indicating how the file is to be opened. Accepts the
+ same values as the builtin open() function.
+ - bufsize -- The file's desired buffer size. Accepts the same values as
+ the builtin open() function.
+ - encoding -- The file's encoding. Accepts the same values as the
+ builtin open() function.
+ - errors -- A string indicating how encoding and decoding errors are to
+ be handled. Accepts the same value as the builtin open() function.
+
+ Backported from CPython 3.12
+ """
+
+ def __init__(
+ self,
+ mode: str = "r",
+ bufsize: int = -1,
+ encoding: str | None = None,
+ errors: str | None = None,
+ ):
+ self._mode = mode
+ self._bufsize = bufsize
+ self._encoding = encoding
+ self._errors = errors
+
+ def __call__(self, string: str) -> typing.IO: # type: ignore[type-arg]
+ # the special argument "-" means sys.std{in,out}
+ if string == "-":
+ if "r" in self._mode:
+ return sys.stdin.buffer if "b" in self._mode else sys.stdin
+ elif any(c in self._mode for c in "wax"):
+ return sys.stdout.buffer if "b" in self._mode else sys.stdout
+ else:
+ msg = f'argument "-" with mode {self._mode}'
+ raise ValueError(msg)
+
+ # all other arguments are used as file names
+ try:
+ return open(string, self._mode, self._bufsize, self._encoding, self._errors)
+ except OSError as e:
+ message = f"can't open '{string}': {e}"
+ raise argparse.ArgumentTypeError(message)
+
+ def __repr__(self) -> str:
+ args = self._mode, self._bufsize
+ kwargs = [("encoding", self._encoding), ("errors", self._errors)]
+ args_str = ", ".join(
+ [repr(arg) for arg in args if arg != -1]
+ + [f"{kw}={arg!r}" for kw, arg in kwargs if arg is not None]
+ )
+ return f"{type(self).__name__}({args_str})"
+
+
+def cli_detect(argv: list[str] | None = None) -> int:
+ """
+ CLI assistant using ARGV and ArgumentParser
+ :param argv:
+    :return: 0 if everything is fine, anything else means trouble
+ """
+ parser = argparse.ArgumentParser(
+ description="The Real First Universal Charset Detector. "
+ "Discover originating encoding used on text file. "
+ "Normalize text to unicode."
+ )
+
+ parser.add_argument(
+ "files", type=FileType("rb"), nargs="+", help="File(s) to be analysed"
+ )
+ parser.add_argument(
+ "-v",
+ "--verbose",
+ action="store_true",
+ default=False,
+ dest="verbose",
+ help="Display complementary information about file if any. "
+ "Stdout will contain logs about the detection process.",
+ )
+ parser.add_argument(
+ "-a",
+ "--with-alternative",
+ action="store_true",
+ default=False,
+ dest="alternatives",
+ help="Output complementary possibilities if any. Top-level JSON WILL be a list.",
+ )
+ parser.add_argument(
+ "-n",
+ "--normalize",
+ action="store_true",
+ default=False,
+ dest="normalize",
+        help="Permit normalizing the input file. If not set, the program does not write anything.",
+ )
+ parser.add_argument(
+ "-m",
+ "--minimal",
+ action="store_true",
+ default=False,
+ dest="minimal",
+        help="Only output the detected charset to STDOUT, disabling JSON output.",
+ )
+ parser.add_argument(
+ "-r",
+ "--replace",
+ action="store_true",
+ default=False,
+ dest="replace",
+ help="Replace file when trying to normalize it instead of creating a new one.",
+ )
+ parser.add_argument(
+ "-f",
+ "--force",
+ action="store_true",
+ default=False,
+ dest="force",
+        help="Replace the file without asking for confirmation; use this flag with caution.",
+ )
+ parser.add_argument(
+ "-i",
+ "--no-preemptive",
+ action="store_true",
+ default=False,
+ dest="no_preemptive",
+ help="Disable looking at a charset declaration to hint the detector.",
+ )
+ parser.add_argument(
+ "-t",
+ "--threshold",
+ action="store",
+ default=0.2,
+ type=float,
+ dest="threshold",
+ help="Define a custom maximum amount of noise allowed in decoded content. 0. <= noise <= 1.",
+ )
+ parser.add_argument(
+ "--version",
+ action="version",
+ version="Charset-Normalizer {} - Python {} - Unicode {} - SpeedUp {}".format(
+ __version__,
+ python_version(),
+ unidata_version,
+ "OFF" if md_module.__file__.lower().endswith(".py") else "ON",
+ ),
+ help="Show version information and exit.",
+ )
+
+ args = parser.parse_args(argv)
+
+ if args.replace is True and args.normalize is False:
+ if args.files:
+ for my_file in args.files:
+ my_file.close()
+        print("Use --replace only in addition to --normalize.", file=sys.stderr)
+ return 1
+
+ if args.force is True and args.replace is False:
+ if args.files:
+ for my_file in args.files:
+ my_file.close()
+        print("Use --force only in addition to --replace.", file=sys.stderr)
+ return 1
+
+ if args.threshold < 0.0 or args.threshold > 1.0:
+ if args.files:
+ for my_file in args.files:
+ my_file.close()
+ print("--threshold VALUE should be between 0. AND 1.", file=sys.stderr)
+ return 1
+
+ x_ = []
+
+ for my_file in args.files:
+ matches = from_fp(
+ my_file,
+ threshold=args.threshold,
+ explain=args.verbose,
+ preemptive_behaviour=args.no_preemptive is False,
+ )
+
+ best_guess = matches.best()
+
+ if best_guess is None:
+ print(
+ 'Unable to identify originating encoding for "{}". {}'.format(
+ my_file.name,
+ (
+ "Maybe try increasing maximum amount of chaos."
+ if args.threshold < 1.0
+ else ""
+ ),
+ ),
+ file=sys.stderr,
+ )
+ x_.append(
+ CliDetectionResult(
+ abspath(my_file.name),
+ None,
+ [],
+ [],
+ "Unknown",
+ [],
+ False,
+ 1.0,
+ 0.0,
+ None,
+ True,
+ )
+ )
+ else:
+ x_.append(
+ CliDetectionResult(
+ abspath(my_file.name),
+ best_guess.encoding,
+ best_guess.encoding_aliases,
+ [
+ cp
+ for cp in best_guess.could_be_from_charset
+ if cp != best_guess.encoding
+ ],
+ best_guess.language,
+ best_guess.alphabets,
+ best_guess.bom,
+ best_guess.percent_chaos,
+ best_guess.percent_coherence,
+ None,
+ True,
+ )
+ )
+
+ if len(matches) > 1 and args.alternatives:
+ for el in matches:
+ if el != best_guess:
+ x_.append(
+ CliDetectionResult(
+ abspath(my_file.name),
+ el.encoding,
+ el.encoding_aliases,
+ [
+ cp
+ for cp in el.could_be_from_charset
+ if cp != el.encoding
+ ],
+ el.language,
+ el.alphabets,
+ el.bom,
+ el.percent_chaos,
+ el.percent_coherence,
+ None,
+ False,
+ )
+ )
+
+ if args.normalize is True:
+ if best_guess.encoding.startswith("utf") is True:
+ print(
+                        'The file "{}" does not need to be normalized, as it already comes from Unicode.'.format(
+ my_file.name
+ ),
+ file=sys.stderr,
+ )
+ if my_file.closed is False:
+ my_file.close()
+ continue
+
+ dir_path = dirname(realpath(my_file.name))
+ file_name = basename(realpath(my_file.name))
+
+ o_: list[str] = file_name.split(".")
+
+ if args.replace is False:
+ o_.insert(-1, best_guess.encoding)
+ if my_file.closed is False:
+ my_file.close()
+ elif (
+ args.force is False
+ and query_yes_no(
+                        'Are you sure you want to normalize "{}" by replacing it?'.format(
+ my_file.name
+ ),
+ "no",
+ )
+ is False
+ ):
+ if my_file.closed is False:
+ my_file.close()
+ continue
+
+ try:
+ x_[0].unicode_path = join(dir_path, ".".join(o_))
+
+ with open(x_[0].unicode_path, "wb") as fp:
+ fp.write(best_guess.output())
+ except OSError as e:
+ print(str(e), file=sys.stderr)
+ if my_file.closed is False:
+ my_file.close()
+ return 2
+
+ if my_file.closed is False:
+ my_file.close()
+
+ if args.minimal is False:
+ print(
+ dumps(
+ [el.__dict__ for el in x_] if len(x_) > 1 else x_[0].__dict__,
+ ensure_ascii=True,
+ indent=4,
+ )
+ )
+ else:
+ for my_file in args.files:
+ print(
+ ", ".join(
+ [
+ el.encoding or "undefined"
+ for el in x_
+ if el.path == abspath(my_file.name)
+ ]
+ )
+ )
+
+ return 0
+
+
+if __name__ == "__main__":
+ cli_detect()
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/cli/__pycache__/__init__.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/cli/__pycache__/__init__.cpython-312.pyc
new file mode 100644
index 0000000..9a56adb
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/cli/__pycache__/__init__.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/cli/__pycache__/__main__.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/cli/__pycache__/__main__.cpython-312.pyc
new file mode 100644
index 0000000..dcc4dbd
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/cli/__pycache__/__main__.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/constant.py b/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/constant.py
new file mode 100644
index 0000000..cc71a01
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/constant.py
@@ -0,0 +1,2015 @@
+from __future__ import annotations
+
+from codecs import BOM_UTF8, BOM_UTF16_BE, BOM_UTF16_LE, BOM_UTF32_BE, BOM_UTF32_LE
+from encodings.aliases import aliases
+from re import IGNORECASE
+from re import compile as re_compile
+
+# Contains, for each eligible encoding, its SIG/BOM marker as a single bytes item or a list of bytes items
+ENCODING_MARKS: dict[str, bytes | list[bytes]] = {
+ "utf_8": BOM_UTF8,
+ "utf_7": [
+ b"\x2b\x2f\x76\x38",
+ b"\x2b\x2f\x76\x39",
+ b"\x2b\x2f\x76\x2b",
+ b"\x2b\x2f\x76\x2f",
+ b"\x2b\x2f\x76\x38\x2d",
+ ],
+ "gb18030": b"\x84\x31\x95\x33",
+ "utf_32": [BOM_UTF32_BE, BOM_UTF32_LE],
+ "utf_16": [BOM_UTF16_BE, BOM_UTF16_LE],
+}
+
+TOO_SMALL_SEQUENCE: int = 32
+TOO_BIG_SEQUENCE: int = int(10e6)
+
+UTF8_MAXIMAL_ALLOCATION: int = 1_112_064
+
+# Up-to-date Unicode ucd/15.0.0
+UNICODE_RANGES_COMBINED: dict[str, range] = {
+ "Control character": range(32),
+ "Basic Latin": range(32, 128),
+ "Latin-1 Supplement": range(128, 256),
+ "Latin Extended-A": range(256, 384),
+ "Latin Extended-B": range(384, 592),
+ "IPA Extensions": range(592, 688),
+ "Spacing Modifier Letters": range(688, 768),
+ "Combining Diacritical Marks": range(768, 880),
+ "Greek and Coptic": range(880, 1024),
+ "Cyrillic": range(1024, 1280),
+ "Cyrillic Supplement": range(1280, 1328),
+ "Armenian": range(1328, 1424),
+ "Hebrew": range(1424, 1536),
+ "Arabic": range(1536, 1792),
+ "Syriac": range(1792, 1872),
+ "Arabic Supplement": range(1872, 1920),
+ "Thaana": range(1920, 1984),
+ "NKo": range(1984, 2048),
+ "Samaritan": range(2048, 2112),
+ "Mandaic": range(2112, 2144),
+ "Syriac Supplement": range(2144, 2160),
+ "Arabic Extended-B": range(2160, 2208),
+ "Arabic Extended-A": range(2208, 2304),
+ "Devanagari": range(2304, 2432),
+ "Bengali": range(2432, 2560),
+ "Gurmukhi": range(2560, 2688),
+ "Gujarati": range(2688, 2816),
+ "Oriya": range(2816, 2944),
+ "Tamil": range(2944, 3072),
+ "Telugu": range(3072, 3200),
+ "Kannada": range(3200, 3328),
+ "Malayalam": range(3328, 3456),
+ "Sinhala": range(3456, 3584),
+ "Thai": range(3584, 3712),
+ "Lao": range(3712, 3840),
+ "Tibetan": range(3840, 4096),
+ "Myanmar": range(4096, 4256),
+ "Georgian": range(4256, 4352),
+ "Hangul Jamo": range(4352, 4608),
+ "Ethiopic": range(4608, 4992),
+ "Ethiopic Supplement": range(4992, 5024),
+ "Cherokee": range(5024, 5120),
+ "Unified Canadian Aboriginal Syllabics": range(5120, 5760),
+ "Ogham": range(5760, 5792),
+ "Runic": range(5792, 5888),
+ "Tagalog": range(5888, 5920),
+ "Hanunoo": range(5920, 5952),
+ "Buhid": range(5952, 5984),
+ "Tagbanwa": range(5984, 6016),
+ "Khmer": range(6016, 6144),
+ "Mongolian": range(6144, 6320),
+ "Unified Canadian Aboriginal Syllabics Extended": range(6320, 6400),
+ "Limbu": range(6400, 6480),
+ "Tai Le": range(6480, 6528),
+ "New Tai Lue": range(6528, 6624),
+ "Khmer Symbols": range(6624, 6656),
+ "Buginese": range(6656, 6688),
+ "Tai Tham": range(6688, 6832),
+ "Combining Diacritical Marks Extended": range(6832, 6912),
+ "Balinese": range(6912, 7040),
+ "Sundanese": range(7040, 7104),
+ "Batak": range(7104, 7168),
+ "Lepcha": range(7168, 7248),
+ "Ol Chiki": range(7248, 7296),
+ "Cyrillic Extended-C": range(7296, 7312),
+ "Georgian Extended": range(7312, 7360),
+ "Sundanese Supplement": range(7360, 7376),
+ "Vedic Extensions": range(7376, 7424),
+ "Phonetic Extensions": range(7424, 7552),
+ "Phonetic Extensions Supplement": range(7552, 7616),
+ "Combining Diacritical Marks Supplement": range(7616, 7680),
+ "Latin Extended Additional": range(7680, 7936),
+ "Greek Extended": range(7936, 8192),
+ "General Punctuation": range(8192, 8304),
+ "Superscripts and Subscripts": range(8304, 8352),
+ "Currency Symbols": range(8352, 8400),
+ "Combining Diacritical Marks for Symbols": range(8400, 8448),
+ "Letterlike Symbols": range(8448, 8528),
+ "Number Forms": range(8528, 8592),
+ "Arrows": range(8592, 8704),
+ "Mathematical Operators": range(8704, 8960),
+ "Miscellaneous Technical": range(8960, 9216),
+ "Control Pictures": range(9216, 9280),
+ "Optical Character Recognition": range(9280, 9312),
+ "Enclosed Alphanumerics": range(9312, 9472),
+ "Box Drawing": range(9472, 9600),
+ "Block Elements": range(9600, 9632),
+ "Geometric Shapes": range(9632, 9728),
+ "Miscellaneous Symbols": range(9728, 9984),
+ "Dingbats": range(9984, 10176),
+ "Miscellaneous Mathematical Symbols-A": range(10176, 10224),
+ "Supplemental Arrows-A": range(10224, 10240),
+ "Braille Patterns": range(10240, 10496),
+ "Supplemental Arrows-B": range(10496, 10624),
+ "Miscellaneous Mathematical Symbols-B": range(10624, 10752),
+ "Supplemental Mathematical Operators": range(10752, 11008),
+ "Miscellaneous Symbols and Arrows": range(11008, 11264),
+ "Glagolitic": range(11264, 11360),
+ "Latin Extended-C": range(11360, 11392),
+ "Coptic": range(11392, 11520),
+ "Georgian Supplement": range(11520, 11568),
+ "Tifinagh": range(11568, 11648),
+ "Ethiopic Extended": range(11648, 11744),
+ "Cyrillic Extended-A": range(11744, 11776),
+ "Supplemental Punctuation": range(11776, 11904),
+ "CJK Radicals Supplement": range(11904, 12032),
+ "Kangxi Radicals": range(12032, 12256),
+ "Ideographic Description Characters": range(12272, 12288),
+ "CJK Symbols and Punctuation": range(12288, 12352),
+ "Hiragana": range(12352, 12448),
+ "Katakana": range(12448, 12544),
+ "Bopomofo": range(12544, 12592),
+ "Hangul Compatibility Jamo": range(12592, 12688),
+ "Kanbun": range(12688, 12704),
+ "Bopomofo Extended": range(12704, 12736),
+ "CJK Strokes": range(12736, 12784),
+ "Katakana Phonetic Extensions": range(12784, 12800),
+ "Enclosed CJK Letters and Months": range(12800, 13056),
+ "CJK Compatibility": range(13056, 13312),
+ "CJK Unified Ideographs Extension A": range(13312, 19904),
+ "Yijing Hexagram Symbols": range(19904, 19968),
+ "CJK Unified Ideographs": range(19968, 40960),
+ "Yi Syllables": range(40960, 42128),
+ "Yi Radicals": range(42128, 42192),
+ "Lisu": range(42192, 42240),
+ "Vai": range(42240, 42560),
+ "Cyrillic Extended-B": range(42560, 42656),
+ "Bamum": range(42656, 42752),
+ "Modifier Tone Letters": range(42752, 42784),
+ "Latin Extended-D": range(42784, 43008),
+ "Syloti Nagri": range(43008, 43056),
+ "Common Indic Number Forms": range(43056, 43072),
+ "Phags-pa": range(43072, 43136),
+ "Saurashtra": range(43136, 43232),
+ "Devanagari Extended": range(43232, 43264),
+ "Kayah Li": range(43264, 43312),
+ "Rejang": range(43312, 43360),
+ "Hangul Jamo Extended-A": range(43360, 43392),
+ "Javanese": range(43392, 43488),
+ "Myanmar Extended-B": range(43488, 43520),
+ "Cham": range(43520, 43616),
+ "Myanmar Extended-A": range(43616, 43648),
+ "Tai Viet": range(43648, 43744),
+ "Meetei Mayek Extensions": range(43744, 43776),
+ "Ethiopic Extended-A": range(43776, 43824),
+ "Latin Extended-E": range(43824, 43888),
+ "Cherokee Supplement": range(43888, 43968),
+ "Meetei Mayek": range(43968, 44032),
+ "Hangul Syllables": range(44032, 55216),
+ "Hangul Jamo Extended-B": range(55216, 55296),
+ "High Surrogates": range(55296, 56192),
+ "High Private Use Surrogates": range(56192, 56320),
+ "Low Surrogates": range(56320, 57344),
+ "Private Use Area": range(57344, 63744),
+ "CJK Compatibility Ideographs": range(63744, 64256),
+ "Alphabetic Presentation Forms": range(64256, 64336),
+ "Arabic Presentation Forms-A": range(64336, 65024),
+ "Variation Selectors": range(65024, 65040),
+ "Vertical Forms": range(65040, 65056),
+ "Combining Half Marks": range(65056, 65072),
+ "CJK Compatibility Forms": range(65072, 65104),
+ "Small Form Variants": range(65104, 65136),
+ "Arabic Presentation Forms-B": range(65136, 65280),
+ "Halfwidth and Fullwidth Forms": range(65280, 65520),
+ "Specials": range(65520, 65536),
+ "Linear B Syllabary": range(65536, 65664),
+ "Linear B Ideograms": range(65664, 65792),
+ "Aegean Numbers": range(65792, 65856),
+ "Ancient Greek Numbers": range(65856, 65936),
+ "Ancient Symbols": range(65936, 66000),
+ "Phaistos Disc": range(66000, 66048),
+ "Lycian": range(66176, 66208),
+ "Carian": range(66208, 66272),
+ "Coptic Epact Numbers": range(66272, 66304),
+ "Old Italic": range(66304, 66352),
+ "Gothic": range(66352, 66384),
+ "Old Permic": range(66384, 66432),
+ "Ugaritic": range(66432, 66464),
+ "Old Persian": range(66464, 66528),
+ "Deseret": range(66560, 66640),
+ "Shavian": range(66640, 66688),
+ "Osmanya": range(66688, 66736),
+ "Osage": range(66736, 66816),
+ "Elbasan": range(66816, 66864),
+ "Caucasian Albanian": range(66864, 66928),
+ "Vithkuqi": range(66928, 67008),
+ "Linear A": range(67072, 67456),
+ "Latin Extended-F": range(67456, 67520),
+ "Cypriot Syllabary": range(67584, 67648),
+ "Imperial Aramaic": range(67648, 67680),
+ "Palmyrene": range(67680, 67712),
+ "Nabataean": range(67712, 67760),
+ "Hatran": range(67808, 67840),
+ "Phoenician": range(67840, 67872),
+ "Lydian": range(67872, 67904),
+ "Meroitic Hieroglyphs": range(67968, 68000),
+ "Meroitic Cursive": range(68000, 68096),
+ "Kharoshthi": range(68096, 68192),
+ "Old South Arabian": range(68192, 68224),
+ "Old North Arabian": range(68224, 68256),
+ "Manichaean": range(68288, 68352),
+ "Avestan": range(68352, 68416),
+ "Inscriptional Parthian": range(68416, 68448),
+ "Inscriptional Pahlavi": range(68448, 68480),
+ "Psalter Pahlavi": range(68480, 68528),
+ "Old Turkic": range(68608, 68688),
+ "Old Hungarian": range(68736, 68864),
+ "Hanifi Rohingya": range(68864, 68928),
+ "Rumi Numeral Symbols": range(69216, 69248),
+ "Yezidi": range(69248, 69312),
+ "Arabic Extended-C": range(69312, 69376),
+ "Old Sogdian": range(69376, 69424),
+ "Sogdian": range(69424, 69488),
+ "Old Uyghur": range(69488, 69552),
+ "Chorasmian": range(69552, 69600),
+ "Elymaic": range(69600, 69632),
+ "Brahmi": range(69632, 69760),
+ "Kaithi": range(69760, 69840),
+ "Sora Sompeng": range(69840, 69888),
+ "Chakma": range(69888, 69968),
+ "Mahajani": range(69968, 70016),
+ "Sharada": range(70016, 70112),
+ "Sinhala Archaic Numbers": range(70112, 70144),
+ "Khojki": range(70144, 70224),
+ "Multani": range(70272, 70320),
+ "Khudawadi": range(70320, 70400),
+ "Grantha": range(70400, 70528),
+ "Newa": range(70656, 70784),
+ "Tirhuta": range(70784, 70880),
+ "Siddham": range(71040, 71168),
+ "Modi": range(71168, 71264),
+ "Mongolian Supplement": range(71264, 71296),
+ "Takri": range(71296, 71376),
+ "Ahom": range(71424, 71504),
+ "Dogra": range(71680, 71760),
+ "Warang Citi": range(71840, 71936),
+ "Dives Akuru": range(71936, 72032),
+ "Nandinagari": range(72096, 72192),
+ "Zanabazar Square": range(72192, 72272),
+ "Soyombo": range(72272, 72368),
+ "Unified Canadian Aboriginal Syllabics Extended-A": range(72368, 72384),
+ "Pau Cin Hau": range(72384, 72448),
+ "Devanagari Extended-A": range(72448, 72544),
+ "Bhaiksuki": range(72704, 72816),
+ "Marchen": range(72816, 72896),
+ "Masaram Gondi": range(72960, 73056),
+ "Gunjala Gondi": range(73056, 73136),
+ "Makasar": range(73440, 73472),
+ "Kawi": range(73472, 73568),
+ "Lisu Supplement": range(73648, 73664),
+ "Tamil Supplement": range(73664, 73728),
+ "Cuneiform": range(73728, 74752),
+ "Cuneiform Numbers and Punctuation": range(74752, 74880),
+ "Early Dynastic Cuneiform": range(74880, 75088),
+ "Cypro-Minoan": range(77712, 77824),
+ "Egyptian Hieroglyphs": range(77824, 78896),
+ "Egyptian Hieroglyph Format Controls": range(78896, 78944),
+ "Anatolian Hieroglyphs": range(82944, 83584),
+ "Bamum Supplement": range(92160, 92736),
+ "Mro": range(92736, 92784),
+ "Tangsa": range(92784, 92880),
+ "Bassa Vah": range(92880, 92928),
+ "Pahawh Hmong": range(92928, 93072),
+ "Medefaidrin": range(93760, 93856),
+ "Miao": range(93952, 94112),
+ "Ideographic Symbols and Punctuation": range(94176, 94208),
+ "Tangut": range(94208, 100352),
+ "Tangut Components": range(100352, 101120),
+ "Khitan Small Script": range(101120, 101632),
+ "Tangut Supplement": range(101632, 101760),
+ "Kana Extended-B": range(110576, 110592),
+ "Kana Supplement": range(110592, 110848),
+ "Kana Extended-A": range(110848, 110896),
+ "Small Kana Extension": range(110896, 110960),
+ "Nushu": range(110960, 111360),
+ "Duployan": range(113664, 113824),
+ "Shorthand Format Controls": range(113824, 113840),
+ "Znamenny Musical Notation": range(118528, 118736),
+ "Byzantine Musical Symbols": range(118784, 119040),
+ "Musical Symbols": range(119040, 119296),
+ "Ancient Greek Musical Notation": range(119296, 119376),
+ "Kaktovik Numerals": range(119488, 119520),
+ "Mayan Numerals": range(119520, 119552),
+ "Tai Xuan Jing Symbols": range(119552, 119648),
+ "Counting Rod Numerals": range(119648, 119680),
+ "Mathematical Alphanumeric Symbols": range(119808, 120832),
+ "Sutton SignWriting": range(120832, 121520),
+ "Latin Extended-G": range(122624, 122880),
+ "Glagolitic Supplement": range(122880, 122928),
+ "Cyrillic Extended-D": range(122928, 123024),
+ "Nyiakeng Puachue Hmong": range(123136, 123216),
+ "Toto": range(123536, 123584),
+ "Wancho": range(123584, 123648),
+ "Nag Mundari": range(124112, 124160),
+ "Ethiopic Extended-B": range(124896, 124928),
+ "Mende Kikakui": range(124928, 125152),
+ "Adlam": range(125184, 125280),
+ "Indic Siyaq Numbers": range(126064, 126144),
+ "Ottoman Siyaq Numbers": range(126208, 126288),
+ "Arabic Mathematical Alphabetic Symbols": range(126464, 126720),
+ "Mahjong Tiles": range(126976, 127024),
+ "Domino Tiles": range(127024, 127136),
+ "Playing Cards": range(127136, 127232),
+ "Enclosed Alphanumeric Supplement": range(127232, 127488),
+ "Enclosed Ideographic Supplement": range(127488, 127744),
+ "Miscellaneous Symbols and Pictographs": range(127744, 128512),
+ "Emoticons range(Emoji)": range(128512, 128592),
+ "Ornamental Dingbats": range(128592, 128640),
+ "Transport and Map Symbols": range(128640, 128768),
+ "Alchemical Symbols": range(128768, 128896),
+ "Geometric Shapes Extended": range(128896, 129024),
+ "Supplemental Arrows-C": range(129024, 129280),
+ "Supplemental Symbols and Pictographs": range(129280, 129536),
+ "Chess Symbols": range(129536, 129648),
+ "Symbols and Pictographs Extended-A": range(129648, 129792),
+ "Symbols for Legacy Computing": range(129792, 130048),
+ "CJK Unified Ideographs Extension B": range(131072, 173792),
+ "CJK Unified Ideographs Extension C": range(173824, 177984),
+ "CJK Unified Ideographs Extension D": range(177984, 178208),
+ "CJK Unified Ideographs Extension E": range(178208, 183984),
+ "CJK Unified Ideographs Extension F": range(183984, 191472),
+ "CJK Compatibility Ideographs Supplement": range(194560, 195104),
+ "CJK Unified Ideographs Extension G": range(196608, 201552),
+ "CJK Unified Ideographs Extension H": range(201552, 205744),
+ "Tags": range(917504, 917632),
+ "Variation Selectors Supplement": range(917760, 918000),
+ "Supplementary Private Use Area-A": range(983040, 1048576),
+ "Supplementary Private Use Area-B": range(1048576, 1114112),
+}
+
+
+UNICODE_SECONDARY_RANGE_KEYWORD: list[str] = [
+ "Supplement",
+ "Extended",
+ "Extensions",
+ "Modifier",
+ "Marks",
+ "Punctuation",
+ "Symbols",
+ "Forms",
+ "Operators",
+ "Miscellaneous",
+ "Drawing",
+ "Block",
+ "Shapes",
+ "Supplemental",
+ "Tags",
+]
+
+RE_POSSIBLE_ENCODING_INDICATION = re_compile(
+ r"(?:(?:encoding)|(?:charset)|(?:coding))(?:[\:= ]{1,10})(?:[\"\']?)([a-zA-Z0-9\-_]+)(?:[\"\']?)",
+ IGNORECASE,
+)
+
+IANA_NO_ALIASES = [
+ "cp720",
+ "cp737",
+ "cp856",
+ "cp874",
+ "cp875",
+ "cp1006",
+ "koi8_r",
+ "koi8_t",
+ "koi8_u",
+]
+
+IANA_SUPPORTED: list[str] = sorted(
+ filter(
+ lambda x: x.endswith("_codec") is False
+ and x not in {"rot_13", "tactis", "mbcs"},
+ list(set(aliases.values())) + IANA_NO_ALIASES,
+ )
+)
+
+IANA_SUPPORTED_COUNT: int = len(IANA_SUPPORTED)
+
+# Pre-computed code pages that are similar, as determined by the function cp_similarity.
+IANA_SUPPORTED_SIMILAR: dict[str, list[str]] = {
+ "cp037": ["cp1026", "cp1140", "cp273", "cp500"],
+ "cp1026": ["cp037", "cp1140", "cp273", "cp500"],
+ "cp1125": ["cp866"],
+ "cp1140": ["cp037", "cp1026", "cp273", "cp500"],
+ "cp1250": ["iso8859_2"],
+ "cp1251": ["kz1048", "ptcp154"],
+ "cp1252": ["iso8859_15", "iso8859_9", "latin_1"],
+ "cp1253": ["iso8859_7"],
+ "cp1254": ["iso8859_15", "iso8859_9", "latin_1"],
+ "cp1257": ["iso8859_13"],
+ "cp273": ["cp037", "cp1026", "cp1140", "cp500"],
+ "cp437": ["cp850", "cp858", "cp860", "cp861", "cp862", "cp863", "cp865"],
+ "cp500": ["cp037", "cp1026", "cp1140", "cp273"],
+ "cp850": ["cp437", "cp857", "cp858", "cp865"],
+ "cp857": ["cp850", "cp858", "cp865"],
+ "cp858": ["cp437", "cp850", "cp857", "cp865"],
+ "cp860": ["cp437", "cp861", "cp862", "cp863", "cp865"],
+ "cp861": ["cp437", "cp860", "cp862", "cp863", "cp865"],
+ "cp862": ["cp437", "cp860", "cp861", "cp863", "cp865"],
+ "cp863": ["cp437", "cp860", "cp861", "cp862", "cp865"],
+ "cp865": ["cp437", "cp850", "cp857", "cp858", "cp860", "cp861", "cp862", "cp863"],
+ "cp866": ["cp1125"],
+ "iso8859_10": ["iso8859_14", "iso8859_15", "iso8859_4", "iso8859_9", "latin_1"],
+ "iso8859_11": ["tis_620"],
+ "iso8859_13": ["cp1257"],
+ "iso8859_14": [
+ "iso8859_10",
+ "iso8859_15",
+ "iso8859_16",
+ "iso8859_3",
+ "iso8859_9",
+ "latin_1",
+ ],
+ "iso8859_15": [
+ "cp1252",
+ "cp1254",
+ "iso8859_10",
+ "iso8859_14",
+ "iso8859_16",
+ "iso8859_3",
+ "iso8859_9",
+ "latin_1",
+ ],
+ "iso8859_16": [
+ "iso8859_14",
+ "iso8859_15",
+ "iso8859_2",
+ "iso8859_3",
+ "iso8859_9",
+ "latin_1",
+ ],
+ "iso8859_2": ["cp1250", "iso8859_16", "iso8859_4"],
+ "iso8859_3": ["iso8859_14", "iso8859_15", "iso8859_16", "iso8859_9", "latin_1"],
+ "iso8859_4": ["iso8859_10", "iso8859_2", "iso8859_9", "latin_1"],
+ "iso8859_7": ["cp1253"],
+ "iso8859_9": [
+ "cp1252",
+ "cp1254",
+ "cp1258",
+ "iso8859_10",
+ "iso8859_14",
+ "iso8859_15",
+ "iso8859_16",
+ "iso8859_3",
+ "iso8859_4",
+ "latin_1",
+ ],
+ "kz1048": ["cp1251", "ptcp154"],
+ "latin_1": [
+ "cp1252",
+ "cp1254",
+ "cp1258",
+ "iso8859_10",
+ "iso8859_14",
+ "iso8859_15",
+ "iso8859_16",
+ "iso8859_3",
+ "iso8859_4",
+ "iso8859_9",
+ ],
+ "mac_iceland": ["mac_roman", "mac_turkish"],
+ "mac_roman": ["mac_iceland", "mac_turkish"],
+ "mac_turkish": ["mac_iceland", "mac_roman"],
+ "ptcp154": ["cp1251", "kz1048"],
+ "tis_620": ["iso8859_11"],
+}
+
+
+CHARDET_CORRESPONDENCE: dict[str, str] = {
+ "iso2022_kr": "ISO-2022-KR",
+ "iso2022_jp": "ISO-2022-JP",
+ "euc_kr": "EUC-KR",
+ "tis_620": "TIS-620",
+ "utf_32": "UTF-32",
+ "euc_jp": "EUC-JP",
+ "koi8_r": "KOI8-R",
+ "iso8859_1": "ISO-8859-1",
+ "iso8859_2": "ISO-8859-2",
+ "iso8859_5": "ISO-8859-5",
+ "iso8859_6": "ISO-8859-6",
+ "iso8859_7": "ISO-8859-7",
+ "iso8859_8": "ISO-8859-8",
+ "utf_16": "UTF-16",
+ "cp855": "IBM855",
+ "mac_cyrillic": "MacCyrillic",
+ "gb2312": "GB2312",
+ "gb18030": "GB18030",
+ "cp932": "CP932",
+ "cp866": "IBM866",
+ "utf_8": "utf-8",
+ "utf_8_sig": "UTF-8-SIG",
+ "shift_jis": "SHIFT_JIS",
+ "big5": "Big5",
+ "cp1250": "windows-1250",
+ "cp1251": "windows-1251",
+ "cp1252": "Windows-1252",
+ "cp1253": "windows-1253",
+ "cp1255": "windows-1255",
+ "cp1256": "windows-1256",
+ "cp1254": "Windows-1254",
+ "cp949": "CP949",
+}
+
+
+COMMON_SAFE_ASCII_CHARACTERS: set[str] = {
+ "<",
+ ">",
+ "=",
+ ":",
+ "/",
+ "&",
+ ";",
+ "{",
+ "}",
+ "[",
+ "]",
+ ",",
+ "|",
+ '"',
+ "-",
+ "(",
+ ")",
+}
+
+# Sample character sets — replace with full lists if needed
+COMMON_CHINESE_CHARACTERS = "的一是在不了有和人这中大为上个国我以要他时来用们生到作地于出就分对成会可主发年动同工也能下过子说产种面而方后多定行学法所民得经十三之进着等部度家电力里如水化高自二理起小物现实加量都两体制机当使点从业本去把性好应开它合还因由其些然前外天政四日那社义事平形相全表间样与关各重新线内数正心反你明看原又么利比或但质气第向道命此变条只没结解问意建月公无系军很情者最立代想已通并提直题党程展五果料象员革位入常文总次品式活设及管特件长求老头基资边流路级少图山统接知较将组见计别她手角期根论运农指几九区强放决西被干做必战先回则任取据处队南给色光门即保治北造百规热领七海口东导器压志世金增争济阶油思术极交受联什认六共权收证改清己美再采转更单风切打白教速花带安场身车例真务具万每目至达走积示议声报斗完类八离华名确才科张信马节话米整空元况今集温传土许步群广石记需段研界拉林律叫且究观越织装影算低持音众书布复容儿须际商非验连断深难近矿千周委素技备半办青省列习响约支般史感劳便团往酸历市克何除消构府太准精值号率族维划选标写存候毛亲快效斯院查江型眼王按格养易置派层片始却专状育厂京识适属圆包火住调满县局照参红细引听该铁价严龙飞"
+
+COMMON_JAPANESE_CHARACTERS = "日一国年大十二本中長出三時行見月分後前生五間上東四今金九入学高円子外八六下来気小七山話女北午百書先名川千水半男西電校語土木聞食車何南万毎白天母火右読友左休父雨"
+
+COMMON_KOREAN_CHARACTERS = "一二三四五六七八九十百千萬上下左右中人女子大小山川日月火水木金土父母天地國名年時文校學生"
+
+# Combine all into a set
+COMMON_CJK_CHARACTERS = set(
+ "".join(
+ [
+ COMMON_CHINESE_CHARACTERS,
+ COMMON_JAPANESE_CHARACTERS,
+ COMMON_KOREAN_CHARACTERS,
+ ]
+ )
+)
+
+KO_NAMES: set[str] = {"johab", "cp949", "euc_kr"}
+ZH_NAMES: set[str] = {"big5", "cp950", "big5hkscs", "hz"}
+
+# Logging LEVEL below DEBUG
+TRACE: int = 5
+
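+# A hedged sketch of how a caller could surface these TRACE records without
+# passing explain=True (assumes the package logger is named "charset_normalizer"):
+#
+#   import logging
+#   logging.basicConfig(level=TRACE)  # install a handler that accepts level 5
+#   logging.getLogger("charset_normalizer").setLevel(TRACE)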
+
+# Language labels that contain the em dash "—"
+# character are to be considered alternative sequences of the original label
+FREQUENCIES: dict[str, list[str]] = {
+ "English": [
+ "e",
+ "a",
+ "t",
+ "i",
+ "o",
+ "n",
+ "s",
+ "r",
+ "h",
+ "l",
+ "d",
+ "c",
+ "u",
+ "m",
+ "f",
+ "p",
+ "g",
+ "w",
+ "y",
+ "b",
+ "v",
+ "k",
+ "x",
+ "j",
+ "z",
+ "q",
+ ],
+ "English—": [
+ "e",
+ "a",
+ "t",
+ "i",
+ "o",
+ "n",
+ "s",
+ "r",
+ "h",
+ "l",
+ "d",
+ "c",
+ "m",
+ "u",
+ "f",
+ "p",
+ "g",
+ "w",
+ "b",
+ "y",
+ "v",
+ "k",
+ "j",
+ "x",
+ "z",
+ "q",
+ ],
+ "German": [
+ "e",
+ "n",
+ "i",
+ "r",
+ "s",
+ "t",
+ "a",
+ "d",
+ "h",
+ "u",
+ "l",
+ "g",
+ "o",
+ "c",
+ "m",
+ "b",
+ "f",
+ "k",
+ "w",
+ "z",
+ "p",
+ "v",
+ "ü",
+ "ä",
+ "ö",
+ "j",
+ ],
+ "French": [
+ "e",
+ "a",
+ "s",
+ "n",
+ "i",
+ "t",
+ "r",
+ "l",
+ "u",
+ "o",
+ "d",
+ "c",
+ "p",
+ "m",
+ "é",
+ "v",
+ "g",
+ "f",
+ "b",
+ "h",
+ "q",
+ "à",
+ "x",
+ "è",
+ "y",
+ "j",
+ ],
+ "Dutch": [
+ "e",
+ "n",
+ "a",
+ "i",
+ "r",
+ "t",
+ "o",
+ "d",
+ "s",
+ "l",
+ "g",
+ "h",
+ "v",
+ "m",
+ "u",
+ "k",
+ "c",
+ "p",
+ "b",
+ "w",
+ "j",
+ "z",
+ "f",
+ "y",
+ "x",
+ "ë",
+ ],
+ "Italian": [
+ "e",
+ "i",
+ "a",
+ "o",
+ "n",
+ "l",
+ "t",
+ "r",
+ "s",
+ "c",
+ "d",
+ "u",
+ "p",
+ "m",
+ "g",
+ "v",
+ "f",
+ "b",
+ "z",
+ "h",
+ "q",
+ "è",
+ "à",
+ "k",
+ "y",
+ "ò",
+ ],
+ "Polish": [
+ "a",
+ "i",
+ "o",
+ "e",
+ "n",
+ "r",
+ "z",
+ "w",
+ "s",
+ "c",
+ "t",
+ "k",
+ "y",
+ "d",
+ "p",
+ "m",
+ "u",
+ "l",
+ "j",
+ "ł",
+ "g",
+ "b",
+ "h",
+ "ą",
+ "ę",
+ "ó",
+ ],
+ "Spanish": [
+ "e",
+ "a",
+ "o",
+ "n",
+ "s",
+ "r",
+ "i",
+ "l",
+ "d",
+ "t",
+ "c",
+ "u",
+ "m",
+ "p",
+ "b",
+ "g",
+ "v",
+ "f",
+ "y",
+ "ó",
+ "h",
+ "q",
+ "í",
+ "j",
+ "z",
+ "á",
+ ],
+ "Russian": [
+ "о",
+ "а",
+ "е",
+ "и",
+ "н",
+ "с",
+ "т",
+ "р",
+ "в",
+ "л",
+ "к",
+ "м",
+ "д",
+ "п",
+ "у",
+ "г",
+ "я",
+ "ы",
+ "з",
+ "б",
+ "й",
+ "ь",
+ "ч",
+ "х",
+ "ж",
+ "ц",
+ ],
+ # Jap-Kanji
+ "Japanese": [
+ "人",
+ "一",
+ "大",
+ "亅",
+ "丁",
+ "丨",
+ "竹",
+ "笑",
+ "口",
+ "日",
+ "今",
+ "二",
+ "彳",
+ "行",
+ "十",
+ "土",
+ "丶",
+ "寸",
+ "寺",
+ "時",
+ "乙",
+ "丿",
+ "乂",
+ "气",
+ "気",
+ "冂",
+ "巾",
+ "亠",
+ "市",
+ "目",
+ "儿",
+ "見",
+ "八",
+ "小",
+ "凵",
+ "県",
+ "月",
+ "彐",
+ "門",
+ "間",
+ "木",
+ "東",
+ "山",
+ "出",
+ "本",
+ "中",
+ "刀",
+ "分",
+ "耳",
+ "又",
+ "取",
+ "最",
+ "言",
+ "田",
+ "心",
+ "思",
+ "刂",
+ "前",
+ "京",
+ "尹",
+ "事",
+ "生",
+ "厶",
+ "云",
+ "会",
+ "未",
+ "来",
+ "白",
+ "冫",
+ "楽",
+ "灬",
+ "馬",
+ "尸",
+ "尺",
+ "駅",
+ "明",
+ "耂",
+ "者",
+ "了",
+ "阝",
+ "都",
+ "高",
+ "卜",
+ "占",
+ "厂",
+ "广",
+ "店",
+ "子",
+ "申",
+ "奄",
+ "亻",
+ "俺",
+ "上",
+ "方",
+ "冖",
+ "学",
+ "衣",
+ "艮",
+ "食",
+ "自",
+ ],
+ # Jap-Katakana
+ "Japanese—": [
+ "ー",
+ "ン",
+ "ス",
+ "・",
+ "ル",
+ "ト",
+ "リ",
+ "イ",
+ "ア",
+ "ラ",
+ "ッ",
+ "ク",
+ "ド",
+ "シ",
+ "レ",
+ "ジ",
+ "タ",
+ "フ",
+ "ロ",
+ "カ",
+ "テ",
+ "マ",
+ "ィ",
+ "グ",
+ "バ",
+ "ム",
+ "プ",
+ "オ",
+ "コ",
+ "デ",
+ "ニ",
+ "ウ",
+ "メ",
+ "サ",
+ "ビ",
+ "ナ",
+ "ブ",
+ "ャ",
+ "エ",
+ "ュ",
+ "チ",
+ "キ",
+ "ズ",
+ "ダ",
+ "パ",
+ "ミ",
+ "ェ",
+ "ョ",
+ "ハ",
+ "セ",
+ "ベ",
+ "ガ",
+ "モ",
+ "ツ",
+ "ネ",
+ "ボ",
+ "ソ",
+ "ノ",
+ "ァ",
+ "ヴ",
+ "ワ",
+ "ポ",
+ "ペ",
+ "ピ",
+ "ケ",
+ "ゴ",
+ "ギ",
+ "ザ",
+ "ホ",
+ "ゲ",
+ "ォ",
+ "ヤ",
+ "ヒ",
+ "ユ",
+ "ヨ",
+ "ヘ",
+ "ゼ",
+ "ヌ",
+ "ゥ",
+ "ゾ",
+ "ヶ",
+ "ヂ",
+ "ヲ",
+ "ヅ",
+ "ヵ",
+ "ヱ",
+ "ヰ",
+ "ヮ",
+ "ヽ",
+ "゠",
+ "ヾ",
+ "ヷ",
+ "ヿ",
+ "ヸ",
+ "ヹ",
+ "ヺ",
+ ],
+ # Jap-Hiragana
+ "Japanese——": [
+ "の",
+ "に",
+ "る",
+ "た",
+ "と",
+ "は",
+ "し",
+ "い",
+ "を",
+ "で",
+ "て",
+ "が",
+ "な",
+ "れ",
+ "か",
+ "ら",
+ "さ",
+ "っ",
+ "り",
+ "す",
+ "あ",
+ "も",
+ "こ",
+ "ま",
+ "う",
+ "く",
+ "よ",
+ "き",
+ "ん",
+ "め",
+ "お",
+ "け",
+ "そ",
+ "つ",
+ "だ",
+ "や",
+ "え",
+ "ど",
+ "わ",
+ "ち",
+ "み",
+ "せ",
+ "じ",
+ "ば",
+ "へ",
+ "び",
+ "ず",
+ "ろ",
+ "ほ",
+ "げ",
+ "む",
+ "べ",
+ "ひ",
+ "ょ",
+ "ゆ",
+ "ぶ",
+ "ご",
+ "ゃ",
+ "ね",
+ "ふ",
+ "ぐ",
+ "ぎ",
+ "ぼ",
+ "ゅ",
+ "づ",
+ "ざ",
+ "ぞ",
+ "ぬ",
+ "ぜ",
+ "ぱ",
+ "ぽ",
+ "ぷ",
+ "ぴ",
+ "ぃ",
+ "ぁ",
+ "ぇ",
+ "ぺ",
+ "ゞ",
+ "ぢ",
+ "ぉ",
+ "ぅ",
+ "ゐ",
+ "ゝ",
+ "ゑ",
+ "゛",
+ "゜",
+ "ゎ",
+ "ゔ",
+ "゚",
+ "ゟ",
+ "゙",
+ "ゕ",
+ "ゖ",
+ ],
+ "Portuguese": [
+ "a",
+ "e",
+ "o",
+ "s",
+ "i",
+ "r",
+ "d",
+ "n",
+ "t",
+ "m",
+ "u",
+ "c",
+ "l",
+ "p",
+ "g",
+ "v",
+ "b",
+ "f",
+ "h",
+ "ã",
+ "q",
+ "é",
+ "ç",
+ "á",
+ "z",
+ "í",
+ ],
+ "Swedish": [
+ "e",
+ "a",
+ "n",
+ "r",
+ "t",
+ "s",
+ "i",
+ "l",
+ "d",
+ "o",
+ "m",
+ "k",
+ "g",
+ "v",
+ "h",
+ "f",
+ "u",
+ "p",
+ "ä",
+ "c",
+ "b",
+ "ö",
+ "å",
+ "y",
+ "j",
+ "x",
+ ],
+ "Chinese": [
+ "的",
+ "一",
+ "是",
+ "不",
+ "了",
+ "在",
+ "人",
+ "有",
+ "我",
+ "他",
+ "这",
+ "个",
+ "们",
+ "中",
+ "来",
+ "上",
+ "大",
+ "为",
+ "和",
+ "国",
+ "地",
+ "到",
+ "以",
+ "说",
+ "时",
+ "要",
+ "就",
+ "出",
+ "会",
+ "可",
+ "也",
+ "你",
+ "对",
+ "生",
+ "能",
+ "而",
+ "子",
+ "那",
+ "得",
+ "于",
+ "着",
+ "下",
+ "自",
+ "之",
+ "年",
+ "过",
+ "发",
+ "后",
+ "作",
+ "里",
+ "用",
+ "道",
+ "行",
+ "所",
+ "然",
+ "家",
+ "种",
+ "事",
+ "成",
+ "方",
+ "多",
+ "经",
+ "么",
+ "去",
+ "法",
+ "学",
+ "如",
+ "都",
+ "同",
+ "现",
+ "当",
+ "没",
+ "动",
+ "面",
+ "起",
+ "看",
+ "定",
+ "天",
+ "分",
+ "还",
+ "进",
+ "好",
+ "小",
+ "部",
+ "其",
+ "些",
+ "主",
+ "样",
+ "理",
+ "心",
+ "她",
+ "本",
+ "前",
+ "开",
+ "但",
+ "因",
+ "只",
+ "从",
+ "想",
+ "实",
+ ],
+ "Ukrainian": [
+ "о",
+ "а",
+ "н",
+ "і",
+ "и",
+ "р",
+ "в",
+ "т",
+ "е",
+ "с",
+ "к",
+ "л",
+ "у",
+ "д",
+ "м",
+ "п",
+ "з",
+ "я",
+ "ь",
+ "б",
+ "г",
+ "й",
+ "ч",
+ "х",
+ "ц",
+ "ї",
+ ],
+ "Norwegian": [
+ "e",
+ "r",
+ "n",
+ "t",
+ "a",
+ "s",
+ "i",
+ "o",
+ "l",
+ "d",
+ "g",
+ "k",
+ "m",
+ "v",
+ "f",
+ "p",
+ "u",
+ "b",
+ "h",
+ "å",
+ "y",
+ "j",
+ "ø",
+ "c",
+ "æ",
+ "w",
+ ],
+ "Finnish": [
+ "a",
+ "i",
+ "n",
+ "t",
+ "e",
+ "s",
+ "l",
+ "o",
+ "u",
+ "k",
+ "ä",
+ "m",
+ "r",
+ "v",
+ "j",
+ "h",
+ "p",
+ "y",
+ "d",
+ "ö",
+ "g",
+ "c",
+ "b",
+ "f",
+ "w",
+ "z",
+ ],
+ "Vietnamese": [
+ "n",
+ "h",
+ "t",
+ "i",
+ "c",
+ "g",
+ "a",
+ "o",
+ "u",
+ "m",
+ "l",
+ "r",
+ "à",
+ "đ",
+ "s",
+ "e",
+ "v",
+ "p",
+ "b",
+ "y",
+ "ư",
+ "d",
+ "á",
+ "k",
+ "ộ",
+ "ế",
+ ],
+ "Czech": [
+ "o",
+ "e",
+ "a",
+ "n",
+ "t",
+ "s",
+ "i",
+ "l",
+ "v",
+ "r",
+ "k",
+ "d",
+ "u",
+ "m",
+ "p",
+ "í",
+ "c",
+ "h",
+ "z",
+ "á",
+ "y",
+ "j",
+ "b",
+ "ě",
+ "é",
+ "ř",
+ ],
+ "Hungarian": [
+ "e",
+ "a",
+ "t",
+ "l",
+ "s",
+ "n",
+ "k",
+ "r",
+ "i",
+ "o",
+ "z",
+ "á",
+ "é",
+ "g",
+ "m",
+ "b",
+ "y",
+ "v",
+ "d",
+ "h",
+ "u",
+ "p",
+ "j",
+ "ö",
+ "f",
+ "c",
+ ],
+ "Korean": [
+ "이",
+ "다",
+ "에",
+ "의",
+ "는",
+ "로",
+ "하",
+ "을",
+ "가",
+ "고",
+ "지",
+ "서",
+ "한",
+ "은",
+ "기",
+ "으",
+ "년",
+ "대",
+ "사",
+ "시",
+ "를",
+ "리",
+ "도",
+ "인",
+ "스",
+ "일",
+ ],
+ "Indonesian": [
+ "a",
+ "n",
+ "e",
+ "i",
+ "r",
+ "t",
+ "u",
+ "s",
+ "d",
+ "k",
+ "m",
+ "l",
+ "g",
+ "p",
+ "b",
+ "o",
+ "h",
+ "y",
+ "j",
+ "c",
+ "w",
+ "f",
+ "v",
+ "z",
+ "x",
+ "q",
+ ],
+ "Turkish": [
+ "a",
+ "e",
+ "i",
+ "n",
+ "r",
+ "l",
+ "ı",
+ "k",
+ "d",
+ "t",
+ "s",
+ "m",
+ "y",
+ "u",
+ "o",
+ "b",
+ "ü",
+ "ş",
+ "v",
+ "g",
+ "z",
+ "h",
+ "c",
+ "p",
+ "ç",
+ "ğ",
+ ],
+ "Romanian": [
+ "e",
+ "i",
+ "a",
+ "r",
+ "n",
+ "t",
+ "u",
+ "l",
+ "o",
+ "c",
+ "s",
+ "d",
+ "p",
+ "m",
+ "ă",
+ "f",
+ "v",
+ "î",
+ "g",
+ "b",
+ "ș",
+ "ț",
+ "z",
+ "h",
+ "â",
+ "j",
+ ],
+ "Farsi": [
+ "ا",
+ "ی",
+ "ر",
+ "د",
+ "ن",
+ "ه",
+ "و",
+ "م",
+ "ت",
+ "ب",
+ "س",
+ "ل",
+ "ک",
+ "ش",
+ "ز",
+ "ف",
+ "گ",
+ "ع",
+ "خ",
+ "ق",
+ "ج",
+ "آ",
+ "پ",
+ "ح",
+ "ط",
+ "ص",
+ ],
+ "Arabic": [
+ "ا",
+ "ل",
+ "ي",
+ "م",
+ "و",
+ "ن",
+ "ر",
+ "ت",
+ "ب",
+ "ة",
+ "ع",
+ "د",
+ "س",
+ "ف",
+ "ه",
+ "ك",
+ "ق",
+ "أ",
+ "ح",
+ "ج",
+ "ش",
+ "ط",
+ "ص",
+ "ى",
+ "خ",
+ "إ",
+ ],
+ "Danish": [
+ "e",
+ "r",
+ "n",
+ "t",
+ "a",
+ "i",
+ "s",
+ "d",
+ "l",
+ "o",
+ "g",
+ "m",
+ "k",
+ "f",
+ "v",
+ "u",
+ "b",
+ "h",
+ "p",
+ "å",
+ "y",
+ "ø",
+ "æ",
+ "c",
+ "j",
+ "w",
+ ],
+ "Serbian": [
+ "а",
+ "и",
+ "о",
+ "е",
+ "н",
+ "р",
+ "с",
+ "у",
+ "т",
+ "к",
+ "ј",
+ "в",
+ "д",
+ "м",
+ "п",
+ "л",
+ "г",
+ "з",
+ "б",
+ "a",
+ "i",
+ "e",
+ "o",
+ "n",
+ "ц",
+ "ш",
+ ],
+ "Lithuanian": [
+ "i",
+ "a",
+ "s",
+ "o",
+ "r",
+ "e",
+ "t",
+ "n",
+ "u",
+ "k",
+ "m",
+ "l",
+ "p",
+ "v",
+ "d",
+ "j",
+ "g",
+ "ė",
+ "b",
+ "y",
+ "ų",
+ "š",
+ "ž",
+ "c",
+ "ą",
+ "į",
+ ],
+ "Slovene": [
+ "e",
+ "a",
+ "i",
+ "o",
+ "n",
+ "r",
+ "s",
+ "l",
+ "t",
+ "j",
+ "v",
+ "k",
+ "d",
+ "p",
+ "m",
+ "u",
+ "z",
+ "b",
+ "g",
+ "h",
+ "č",
+ "c",
+ "š",
+ "ž",
+ "f",
+ "y",
+ ],
+ "Slovak": [
+ "o",
+ "a",
+ "e",
+ "n",
+ "i",
+ "r",
+ "v",
+ "t",
+ "s",
+ "l",
+ "k",
+ "d",
+ "m",
+ "p",
+ "u",
+ "c",
+ "h",
+ "j",
+ "b",
+ "z",
+ "á",
+ "y",
+ "ý",
+ "í",
+ "č",
+ "é",
+ ],
+ "Hebrew": [
+ "י",
+ "ו",
+ "ה",
+ "ל",
+ "ר",
+ "ב",
+ "ת",
+ "מ",
+ "א",
+ "ש",
+ "נ",
+ "ע",
+ "ם",
+ "ד",
+ "ק",
+ "ח",
+ "פ",
+ "ס",
+ "כ",
+ "ג",
+ "ט",
+ "צ",
+ "ן",
+ "ז",
+ "ך",
+ ],
+ "Bulgarian": [
+ "а",
+ "и",
+ "о",
+ "е",
+ "н",
+ "т",
+ "р",
+ "с",
+ "в",
+ "л",
+ "к",
+ "д",
+ "п",
+ "м",
+ "з",
+ "г",
+ "я",
+ "ъ",
+ "у",
+ "б",
+ "ч",
+ "ц",
+ "й",
+ "ж",
+ "щ",
+ "х",
+ ],
+ "Croatian": [
+ "a",
+ "i",
+ "o",
+ "e",
+ "n",
+ "r",
+ "j",
+ "s",
+ "t",
+ "u",
+ "k",
+ "l",
+ "v",
+ "d",
+ "m",
+ "p",
+ "g",
+ "z",
+ "b",
+ "c",
+ "č",
+ "h",
+ "š",
+ "ž",
+ "ć",
+ "f",
+ ],
+ "Hindi": [
+ "क",
+ "र",
+ "स",
+ "न",
+ "त",
+ "म",
+ "ह",
+ "प",
+ "य",
+ "ल",
+ "व",
+ "ज",
+ "द",
+ "ग",
+ "ब",
+ "श",
+ "ट",
+ "अ",
+ "ए",
+ "थ",
+ "भ",
+ "ड",
+ "च",
+ "ध",
+ "ष",
+ "इ",
+ ],
+ "Estonian": [
+ "a",
+ "i",
+ "e",
+ "s",
+ "t",
+ "l",
+ "u",
+ "n",
+ "o",
+ "k",
+ "r",
+ "d",
+ "m",
+ "v",
+ "g",
+ "p",
+ "j",
+ "h",
+ "ä",
+ "b",
+ "õ",
+ "ü",
+ "f",
+ "c",
+ "ö",
+ "y",
+ ],
+ "Thai": [
+ "า",
+ "น",
+ "ร",
+ "อ",
+ "ก",
+ "เ",
+ "ง",
+ "ม",
+ "ย",
+ "ล",
+ "ว",
+ "ด",
+ "ท",
+ "ส",
+ "ต",
+ "ะ",
+ "ป",
+ "บ",
+ "ค",
+ "ห",
+ "แ",
+ "จ",
+ "พ",
+ "ช",
+ "ข",
+ "ใ",
+ ],
+ "Greek": [
+ "α",
+ "τ",
+ "ο",
+ "ι",
+ "ε",
+ "ν",
+ "ρ",
+ "σ",
+ "κ",
+ "η",
+ "π",
+ "ς",
+ "υ",
+ "μ",
+ "λ",
+ "ί",
+ "ό",
+ "ά",
+ "γ",
+ "έ",
+ "δ",
+ "ή",
+ "ω",
+ "χ",
+ "θ",
+ "ύ",
+ ],
+ "Tamil": [
+ "க",
+ "த",
+ "ப",
+ "ட",
+ "ர",
+ "ம",
+ "ல",
+ "ன",
+ "வ",
+ "ற",
+ "ய",
+ "ள",
+ "ச",
+ "ந",
+ "இ",
+ "ண",
+ "அ",
+ "ஆ",
+ "ழ",
+ "ங",
+ "எ",
+ "உ",
+ "ஒ",
+ "ஸ",
+ ],
+ "Kazakh": [
+ "а",
+ "ы",
+ "е",
+ "н",
+ "т",
+ "р",
+ "л",
+ "і",
+ "д",
+ "с",
+ "м",
+ "қ",
+ "к",
+ "о",
+ "б",
+ "и",
+ "у",
+ "ғ",
+ "ж",
+ "ң",
+ "з",
+ "ш",
+ "й",
+ "п",
+ "г",
+ "ө",
+ ],
+}
+
+LANGUAGE_SUPPORTED_COUNT: int = len(FREQUENCIES)
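
The mapping above ends with `LANGUAGE_SUPPORTED_COUNT`, which is simply the number of languages carried by the frequency table. A minimal sketch of inspecting it, assuming (as the package's `cd` module suggests) that the table is importable as `charset_normalizer.constant.FREQUENCIES`:

```python
# Minimal sketch: inspect the per-language frequency table shipped with charset_normalizer.
# Assumes FREQUENCIES and LANGUAGE_SUPPORTED_COUNT live in charset_normalizer.constant.
from charset_normalizer.constant import FREQUENCIES, LANGUAGE_SUPPORTED_COUNT

print(f"{LANGUAGE_SUPPORTED_COUNT} languages supported")

# Each entry maps a language name to its most frequent characters,
# ordered from most to least common.
swedish = FREQUENCIES["Swedish"]
print(swedish[:5])       # ['e', 'a', 'n', 'r', 't'] per the table above
print("å" in swedish)    # True: 'å' appears among the top Swedish letters
```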
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/legacy.py b/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/legacy.py
new file mode 100644
index 0000000..360a310
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/legacy.py
@@ -0,0 +1,80 @@
+from __future__ import annotations
+
+from typing import TYPE_CHECKING, Any
+from warnings import warn
+
+from .api import from_bytes
+from .constant import CHARDET_CORRESPONDENCE, TOO_SMALL_SEQUENCE
+
+# TODO: remove this check when dropping Python 3.7 support
+if TYPE_CHECKING:
+ from typing_extensions import TypedDict
+
+ class ResultDict(TypedDict):
+ encoding: str | None
+ language: str
+ confidence: float | None
+
+
+def detect(
+ byte_str: bytes, should_rename_legacy: bool = False, **kwargs: Any
+) -> ResultDict:
+ """
+ Chardet legacy method.
+ Detect the encoding of the given byte string. It should be mostly backward-compatible.
+ The encoding name will match Chardet's own naming whenever possible (except for encodings Chardet does not support).
+ This function is deprecated but kept to make migrating a project easier; consult the documentation for
+ further information. It is not planned for removal.
+
+ :param byte_str: The byte sequence to examine.
+ :param should_rename_legacy: Should we rename legacy encodings
+ to their more modern equivalents?
+ """
+ if len(kwargs):
+ warn(
+ f"charset-normalizer disregard arguments '{','.join(list(kwargs.keys()))}' in legacy function detect()"
+ )
+
+ if not isinstance(byte_str, (bytearray, bytes)):
+ raise TypeError( # pragma: nocover
+ f"Expected object of type bytes or bytearray, got: {type(byte_str)}"
+ )
+
+ if isinstance(byte_str, bytearray):
+ byte_str = bytes(byte_str)
+
+ r = from_bytes(byte_str).best()
+
+ encoding = r.encoding if r is not None else None
+ language = r.language if r is not None and r.language != "Unknown" else ""
+ confidence = 1.0 - r.chaos if r is not None else None
+
+ # automatically lower confidence
+ # on small bytes samples.
+ # https://github.com/jawah/charset_normalizer/issues/391
+ if (
+ confidence is not None
+ and confidence >= 0.9
+ and encoding
+ not in {
+ "utf_8",
+ "ascii",
+ }
+ and r.bom is False # type: ignore[union-attr]
+ and len(byte_str) < TOO_SMALL_SEQUENCE
+ ):
+ confidence -= 0.2
+
+ # Note: CharsetNormalizer does not return 'UTF-8-SIG' as the sig gets stripped in the detection/normalization process
+ # but chardet does return 'utf-8-sig' and it is a valid codec name.
+ if r is not None and encoding == "utf_8" and r.bom:
+ encoding += "_sig"
+
+ if should_rename_legacy is False and encoding in CHARDET_CORRESPONDENCE:
+ encoding = CHARDET_CORRESPONDENCE[encoding]
+
+ return {
+ "encoding": encoding,
+ "language": language,
+ "confidence": confidence,
+ }
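
The `detect()` shim above mirrors chardet's return shape (`encoding`, `language`, `confidence`), lowers the confidence for very small non-UTF-8 samples, and maps encoding names back to chardet's spelling unless `should_rename_legacy` is set. A minimal usage sketch; the concrete guesses in the comments are illustrative, not guaranteed:

```python
# Minimal sketch of the chardet-compatible legacy API defined above.
from charset_normalizer import detect

payload = "Bonjour, ceci est un exemple accentué.".encode("cp1252")
result = detect(payload)

# The return value is a dict with the same keys chardet uses.
print(result["encoding"])    # a guessed encoding name, e.g. "Windows-1252"
print(result["language"])    # a guessed language, or "" when unknown
print(result["confidence"])  # a float in [0, 1], lowered for tiny samples
```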
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/md.cpython-312-x86_64-linux-gnu.so b/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/md.cpython-312-x86_64-linux-gnu.so
new file mode 100755
index 0000000..857d747
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/md.cpython-312-x86_64-linux-gnu.so differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/md.py b/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/md.py
new file mode 100644
index 0000000..12ce024
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/md.py
@@ -0,0 +1,635 @@
+from __future__ import annotations
+
+from functools import lru_cache
+from logging import getLogger
+
+from .constant import (
+ COMMON_SAFE_ASCII_CHARACTERS,
+ TRACE,
+ UNICODE_SECONDARY_RANGE_KEYWORD,
+)
+from .utils import (
+ is_accentuated,
+ is_arabic,
+ is_arabic_isolated_form,
+ is_case_variable,
+ is_cjk,
+ is_emoticon,
+ is_hangul,
+ is_hiragana,
+ is_katakana,
+ is_latin,
+ is_punctuation,
+ is_separator,
+ is_symbol,
+ is_thai,
+ is_unprintable,
+ remove_accent,
+ unicode_range,
+ is_cjk_uncommon,
+)
+
+
+class MessDetectorPlugin:
+ """
+ Base abstract class used for mess detection plugins.
+ All detectors MUST extend it and implement the given methods.
+ """
+
+ def eligible(self, character: str) -> bool:
+ """
+ Determine if given character should be fed in.
+ """
+ raise NotImplementedError # pragma: nocover
+
+ def feed(self, character: str) -> None:
+ """
+ The main routine to be executed for each character.
+ Insert here the logic by which the text would be considered chaotic.
+ """
+ raise NotImplementedError # pragma: nocover
+
+ def reset(self) -> None: # pragma: no cover
+ """
+ Reset the plugin to its initial state.
+ """
+ raise NotImplementedError
+
+ @property
+ def ratio(self) -> float:
+ """
+ Compute the chaos ratio based on what your feed() has seen.
+ Must NOT be lower than 0.0; there is no upper bound.
+ """
+ raise NotImplementedError # pragma: nocover
+
+
+class TooManySymbolOrPunctuationPlugin(MessDetectorPlugin):
+ def __init__(self) -> None:
+ self._punctuation_count: int = 0
+ self._symbol_count: int = 0
+ self._character_count: int = 0
+
+ self._last_printable_char: str | None = None
+ self._frenzy_symbol_in_word: bool = False
+
+ def eligible(self, character: str) -> bool:
+ return character.isprintable()
+
+ def feed(self, character: str) -> None:
+ self._character_count += 1
+
+ if (
+ character != self._last_printable_char
+ and character not in COMMON_SAFE_ASCII_CHARACTERS
+ ):
+ if is_punctuation(character):
+ self._punctuation_count += 1
+ elif (
+ character.isdigit() is False
+ and is_symbol(character)
+ and is_emoticon(character) is False
+ ):
+ self._symbol_count += 2
+
+ self._last_printable_char = character
+
+ def reset(self) -> None: # Abstract
+ self._punctuation_count = 0
+ self._character_count = 0
+ self._symbol_count = 0
+
+ @property
+ def ratio(self) -> float:
+ if self._character_count == 0:
+ return 0.0
+
+ ratio_of_punctuation: float = (
+ self._punctuation_count + self._symbol_count
+ ) / self._character_count
+
+ return ratio_of_punctuation if ratio_of_punctuation >= 0.3 else 0.0
+
+
+class TooManyAccentuatedPlugin(MessDetectorPlugin):
+ def __init__(self) -> None:
+ self._character_count: int = 0
+ self._accentuated_count: int = 0
+
+ def eligible(self, character: str) -> bool:
+ return character.isalpha()
+
+ def feed(self, character: str) -> None:
+ self._character_count += 1
+
+ if is_accentuated(character):
+ self._accentuated_count += 1
+
+ def reset(self) -> None: # Abstract
+ self._character_count = 0
+ self._accentuated_count = 0
+
+ @property
+ def ratio(self) -> float:
+ if self._character_count < 8:
+ return 0.0
+
+ ratio_of_accentuation: float = self._accentuated_count / self._character_count
+ return ratio_of_accentuation if ratio_of_accentuation >= 0.35 else 0.0
+
+
+class UnprintablePlugin(MessDetectorPlugin):
+ def __init__(self) -> None:
+ self._unprintable_count: int = 0
+ self._character_count: int = 0
+
+ def eligible(self, character: str) -> bool:
+ return True
+
+ def feed(self, character: str) -> None:
+ if is_unprintable(character):
+ self._unprintable_count += 1
+ self._character_count += 1
+
+ def reset(self) -> None: # Abstract
+ self._unprintable_count = 0
+
+ @property
+ def ratio(self) -> float:
+ if self._character_count == 0:
+ return 0.0
+
+ return (self._unprintable_count * 8) / self._character_count
+
+
+class SuspiciousDuplicateAccentPlugin(MessDetectorPlugin):
+ def __init__(self) -> None:
+ self._successive_count: int = 0
+ self._character_count: int = 0
+
+ self._last_latin_character: str | None = None
+
+ def eligible(self, character: str) -> bool:
+ return character.isalpha() and is_latin(character)
+
+ def feed(self, character: str) -> None:
+ self._character_count += 1
+ if (
+ self._last_latin_character is not None
+ and is_accentuated(character)
+ and is_accentuated(self._last_latin_character)
+ ):
+ if character.isupper() and self._last_latin_character.isupper():
+ self._successive_count += 1
+ # Worse if it's the same char duplicated with a different accent.
+ if remove_accent(character) == remove_accent(self._last_latin_character):
+ self._successive_count += 1
+ self._last_latin_character = character
+
+ def reset(self) -> None: # Abstract
+ self._successive_count = 0
+ self._character_count = 0
+ self._last_latin_character = None
+
+ @property
+ def ratio(self) -> float:
+ if self._character_count == 0:
+ return 0.0
+
+ return (self._successive_count * 2) / self._character_count
+
+
+class SuspiciousRange(MessDetectorPlugin):
+ def __init__(self) -> None:
+ self._suspicious_successive_range_count: int = 0
+ self._character_count: int = 0
+ self._last_printable_seen: str | None = None
+
+ def eligible(self, character: str) -> bool:
+ return character.isprintable()
+
+ def feed(self, character: str) -> None:
+ self._character_count += 1
+
+ if (
+ character.isspace()
+ or is_punctuation(character)
+ or character in COMMON_SAFE_ASCII_CHARACTERS
+ ):
+ self._last_printable_seen = None
+ return
+
+ if self._last_printable_seen is None:
+ self._last_printable_seen = character
+ return
+
+ unicode_range_a: str | None = unicode_range(self._last_printable_seen)
+ unicode_range_b: str | None = unicode_range(character)
+
+ if is_suspiciously_successive_range(unicode_range_a, unicode_range_b):
+ self._suspicious_successive_range_count += 1
+
+ self._last_printable_seen = character
+
+ def reset(self) -> None: # Abstract
+ self._character_count = 0
+ self._suspicious_successive_range_count = 0
+ self._last_printable_seen = None
+
+ @property
+ def ratio(self) -> float:
+ if self._character_count <= 13:
+ return 0.0
+
+ ratio_of_suspicious_range_usage: float = (
+ self._suspicious_successive_range_count * 2
+ ) / self._character_count
+
+ return ratio_of_suspicious_range_usage
+
+
+class SuperWeirdWordPlugin(MessDetectorPlugin):
+ def __init__(self) -> None:
+ self._word_count: int = 0
+ self._bad_word_count: int = 0
+ self._foreign_long_count: int = 0
+
+ self._is_current_word_bad: bool = False
+ self._foreign_long_watch: bool = False
+
+ self._character_count: int = 0
+ self._bad_character_count: int = 0
+
+ self._buffer: str = ""
+ self._buffer_accent_count: int = 0
+ self._buffer_glyph_count: int = 0
+
+ def eligible(self, character: str) -> bool:
+ return True
+
+ def feed(self, character: str) -> None:
+ if character.isalpha():
+ self._buffer += character
+ if is_accentuated(character):
+ self._buffer_accent_count += 1
+ if (
+ self._foreign_long_watch is False
+ and (is_latin(character) is False or is_accentuated(character))
+ and is_cjk(character) is False
+ and is_hangul(character) is False
+ and is_katakana(character) is False
+ and is_hiragana(character) is False
+ and is_thai(character) is False
+ ):
+ self._foreign_long_watch = True
+ if (
+ is_cjk(character)
+ or is_hangul(character)
+ or is_katakana(character)
+ or is_hiragana(character)
+ or is_thai(character)
+ ):
+ self._buffer_glyph_count += 1
+ return
+ if not self._buffer:
+ return
+ if (
+ character.isspace() or is_punctuation(character) or is_separator(character)
+ ) and self._buffer:
+ self._word_count += 1
+ buffer_length: int = len(self._buffer)
+
+ self._character_count += buffer_length
+
+ if buffer_length >= 4:
+ if self._buffer_accent_count / buffer_length >= 0.5:
+ self._is_current_word_bad = True
+ # Words/buffers ending with an upper-case accentuated letter are so rare
+ # that we will consider them all suspicious. Same weight as the foreign_long suspicion.
+ elif (
+ is_accentuated(self._buffer[-1])
+ and self._buffer[-1].isupper()
+ and all(_.isupper() for _ in self._buffer) is False
+ ):
+ self._foreign_long_count += 1
+ self._is_current_word_bad = True
+ elif self._buffer_glyph_count == 1:
+ self._is_current_word_bad = True
+ self._foreign_long_count += 1
+ if buffer_length >= 24 and self._foreign_long_watch:
+ camel_case_dst = [
+ i
+ for c, i in zip(self._buffer, range(0, buffer_length))
+ if c.isupper()
+ ]
+ probable_camel_cased: bool = False
+
+ if camel_case_dst and (len(camel_case_dst) / buffer_length <= 0.3):
+ probable_camel_cased = True
+
+ if not probable_camel_cased:
+ self._foreign_long_count += 1
+ self._is_current_word_bad = True
+
+ if self._is_current_word_bad:
+ self._bad_word_count += 1
+ self._bad_character_count += len(self._buffer)
+ self._is_current_word_bad = False
+
+ self._foreign_long_watch = False
+ self._buffer = ""
+ self._buffer_accent_count = 0
+ self._buffer_glyph_count = 0
+ elif (
+ character not in {"<", ">", "-", "=", "~", "|", "_"}
+ and character.isdigit() is False
+ and is_symbol(character)
+ ):
+ self._is_current_word_bad = True
+ self._buffer += character
+
+ def reset(self) -> None: # Abstract
+ self._buffer = ""
+ self._is_current_word_bad = False
+ self._foreign_long_watch = False
+ self._bad_word_count = 0
+ self._word_count = 0
+ self._character_count = 0
+ self._bad_character_count = 0
+ self._foreign_long_count = 0
+
+ @property
+ def ratio(self) -> float:
+ if self._word_count <= 10 and self._foreign_long_count == 0:
+ return 0.0
+
+ return self._bad_character_count / self._character_count
+
+
+class CjkUncommonPlugin(MessDetectorPlugin):
+ """
+ Detect messy CJK text that probably means nothing.
+ """
+
+ def __init__(self) -> None:
+ self._character_count: int = 0
+ self._uncommon_count: int = 0
+
+ def eligible(self, character: str) -> bool:
+ return is_cjk(character)
+
+ def feed(self, character: str) -> None:
+ self._character_count += 1
+
+ if is_cjk_uncommon(character):
+ self._uncommon_count += 1
+ return
+
+ def reset(self) -> None: # Abstract
+ self._character_count = 0
+ self._uncommon_count = 0
+
+ @property
+ def ratio(self) -> float:
+ if self._character_count < 8:
+ return 0.0
+
+ uncommon_form_usage: float = self._uncommon_count / self._character_count
+
+ # We can be pretty sure it's garbage when uncommon characters are widely
+ # used. Otherwise it could just be Traditional Chinese, for example.
+ return uncommon_form_usage / 10 if uncommon_form_usage > 0.5 else 0.0
+
+
+class ArchaicUpperLowerPlugin(MessDetectorPlugin):
+ def __init__(self) -> None:
+ self._buf: bool = False
+
+ self._character_count_since_last_sep: int = 0
+
+ self._successive_upper_lower_count: int = 0
+ self._successive_upper_lower_count_final: int = 0
+
+ self._character_count: int = 0
+
+ self._last_alpha_seen: str | None = None
+ self._current_ascii_only: bool = True
+
+ def eligible(self, character: str) -> bool:
+ return True
+
+ def feed(self, character: str) -> None:
+ is_concerned = character.isalpha() and is_case_variable(character)
+ chunk_sep = is_concerned is False
+
+ if chunk_sep and self._character_count_since_last_sep > 0:
+ if (
+ self._character_count_since_last_sep <= 64
+ and character.isdigit() is False
+ and self._current_ascii_only is False
+ ):
+ self._successive_upper_lower_count_final += (
+ self._successive_upper_lower_count
+ )
+
+ self._successive_upper_lower_count = 0
+ self._character_count_since_last_sep = 0
+ self._last_alpha_seen = None
+ self._buf = False
+ self._character_count += 1
+ self._current_ascii_only = True
+
+ return
+
+ if self._current_ascii_only is True and character.isascii() is False:
+ self._current_ascii_only = False
+
+ if self._last_alpha_seen is not None:
+ if (character.isupper() and self._last_alpha_seen.islower()) or (
+ character.islower() and self._last_alpha_seen.isupper()
+ ):
+ if self._buf is True:
+ self._successive_upper_lower_count += 2
+ self._buf = False
+ else:
+ self._buf = True
+ else:
+ self._buf = False
+
+ self._character_count += 1
+ self._character_count_since_last_sep += 1
+ self._last_alpha_seen = character
+
+ def reset(self) -> None: # Abstract
+ self._character_count = 0
+ self._character_count_since_last_sep = 0
+ self._successive_upper_lower_count = 0
+ self._successive_upper_lower_count_final = 0
+ self._last_alpha_seen = None
+ self._buf = False
+ self._current_ascii_only = True
+
+ @property
+ def ratio(self) -> float:
+ if self._character_count == 0:
+ return 0.0
+
+ return self._successive_upper_lower_count_final / self._character_count
+
+
+class ArabicIsolatedFormPlugin(MessDetectorPlugin):
+ def __init__(self) -> None:
+ self._character_count: int = 0
+ self._isolated_form_count: int = 0
+
+ def reset(self) -> None: # Abstract
+ self._character_count = 0
+ self._isolated_form_count = 0
+
+ def eligible(self, character: str) -> bool:
+ return is_arabic(character)
+
+ def feed(self, character: str) -> None:
+ self._character_count += 1
+
+ if is_arabic_isolated_form(character):
+ self._isolated_form_count += 1
+
+ @property
+ def ratio(self) -> float:
+ if self._character_count < 8:
+ return 0.0
+
+ isolated_form_usage: float = self._isolated_form_count / self._character_count
+
+ return isolated_form_usage
+
+
+@lru_cache(maxsize=1024)
+def is_suspiciously_successive_range(
+ unicode_range_a: str | None, unicode_range_b: str | None
+) -> bool:
+ """
+ Determine whether two Unicode ranges seen next to each other can be considered suspicious.
+ """
+ if unicode_range_a is None or unicode_range_b is None:
+ return True
+
+ if unicode_range_a == unicode_range_b:
+ return False
+
+ if "Latin" in unicode_range_a and "Latin" in unicode_range_b:
+ return False
+
+ if "Emoticons" in unicode_range_a or "Emoticons" in unicode_range_b:
+ return False
+
+ # Latin characters can be accompanied with a combining diacritical mark
+ # eg. Vietnamese.
+ if ("Latin" in unicode_range_a or "Latin" in unicode_range_b) and (
+ "Combining" in unicode_range_a or "Combining" in unicode_range_b
+ ):
+ return False
+
+ keywords_range_a, keywords_range_b = (
+ unicode_range_a.split(" "),
+ unicode_range_b.split(" "),
+ )
+
+ for el in keywords_range_a:
+ if el in UNICODE_SECONDARY_RANGE_KEYWORD:
+ continue
+ if el in keywords_range_b:
+ return False
+
+ # Japanese Exception
+ range_a_jp_chars, range_b_jp_chars = (
+ unicode_range_a
+ in (
+ "Hiragana",
+ "Katakana",
+ ),
+ unicode_range_b in ("Hiragana", "Katakana"),
+ )
+ if (range_a_jp_chars or range_b_jp_chars) and (
+ "CJK" in unicode_range_a or "CJK" in unicode_range_b
+ ):
+ return False
+ if range_a_jp_chars and range_b_jp_chars:
+ return False
+
+ if "Hangul" in unicode_range_a or "Hangul" in unicode_range_b:
+ if "CJK" in unicode_range_a or "CJK" in unicode_range_b:
+ return False
+ if unicode_range_a == "Basic Latin" or unicode_range_b == "Basic Latin":
+ return False
+
+ # Chinese/Japanese use dedicated range for punctuation and/or separators.
+ if ("CJK" in unicode_range_a or "CJK" in unicode_range_b) or (
+ unicode_range_a in ["Katakana", "Hiragana"]
+ and unicode_range_b in ["Katakana", "Hiragana"]
+ ):
+ if "Punctuation" in unicode_range_a or "Punctuation" in unicode_range_b:
+ return False
+ if "Forms" in unicode_range_a or "Forms" in unicode_range_b:
+ return False
+ if unicode_range_a == "Basic Latin" or unicode_range_b == "Basic Latin":
+ return False
+
+ return True
+
+
+@lru_cache(maxsize=2048)
+def mess_ratio(
+ decoded_sequence: str, maximum_threshold: float = 0.2, debug: bool = False
+) -> float:
+ """
+ Compute a mess ratio for a decoded byte sequence. Reaching the maximum threshold stops the computation early.
+ """
+
+ detectors: list[MessDetectorPlugin] = [
+ md_class() for md_class in MessDetectorPlugin.__subclasses__()
+ ]
+
+ length: int = len(decoded_sequence) + 1
+
+ mean_mess_ratio: float = 0.0
+
+ if length < 512:
+ intermediary_mean_mess_ratio_calc: int = 32
+ elif length <= 1024:
+ intermediary_mean_mess_ratio_calc = 64
+ else:
+ intermediary_mean_mess_ratio_calc = 128
+
+ for character, index in zip(decoded_sequence + "\n", range(length)):
+ for detector in detectors:
+ if detector.eligible(character):
+ detector.feed(character)
+
+ if (
+ index > 0 and index % intermediary_mean_mess_ratio_calc == 0
+ ) or index == length - 1:
+ mean_mess_ratio = sum(dt.ratio for dt in detectors)
+
+ if mean_mess_ratio >= maximum_threshold:
+ break
+
+ if debug:
+ logger = getLogger("charset_normalizer")
+
+ logger.log(
+ TRACE,
+ "Mess-detector extended-analysis start. "
+ f"intermediary_mean_mess_ratio_calc={intermediary_mean_mess_ratio_calc} mean_mess_ratio={mean_mess_ratio} "
+ f"maximum_threshold={maximum_threshold}",
+ )
+
+ if len(decoded_sequence) > 16:
+ logger.log(TRACE, f"Starting with: {decoded_sequence[:16]}")
+ logger.log(TRACE, f"Ending with: {decoded_sequence[-16::]}")
+
+ for dt in detectors:
+ logger.log(TRACE, f"{dt.__class__}: {dt.ratio}")
+
+ return round(mean_mess_ratio, 3)
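
`mess_ratio()` instantiates every `MessDetectorPlugin` subclass, feeds the decoded text character by character, and checks the summed plugin ratios at fixed intervals so it can bail out once `maximum_threshold` is exceeded. A small sketch of calling this internal helper directly; the exact scores are implementation details and may vary between releases:

```python
# Minimal sketch of the internal mess-detection entry point defined above.
from charset_normalizer.md import mess_ratio

clean = "This is a perfectly ordinary English sentence."
noisy = "ÃƒÂ©ÃƒÂ¨ mojibake-looking soup Ã‚Â£ Ã‚Â¤ ÃƒÂ¼"

print(mess_ratio(clean))                         # expected to be close to 0.0
print(mess_ratio(noisy, maximum_threshold=0.2))  # expected to be noticeably higher; may stop early
```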
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/md__mypyc.cpython-312-x86_64-linux-gnu.so b/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/md__mypyc.cpython-312-x86_64-linux-gnu.so
new file mode 100755
index 0000000..d5091af
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/md__mypyc.cpython-312-x86_64-linux-gnu.so differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/models.py b/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/models.py
new file mode 100644
index 0000000..1042758
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/models.py
@@ -0,0 +1,360 @@
+from __future__ import annotations
+
+from encodings.aliases import aliases
+from hashlib import sha256
+from json import dumps
+from re import sub
+from typing import Any, Iterator, List, Tuple
+
+from .constant import RE_POSSIBLE_ENCODING_INDICATION, TOO_BIG_SEQUENCE
+from .utils import iana_name, is_multi_byte_encoding, unicode_range
+
+
+class CharsetMatch:
+ def __init__(
+ self,
+ payload: bytes,
+ guessed_encoding: str,
+ mean_mess_ratio: float,
+ has_sig_or_bom: bool,
+ languages: CoherenceMatches,
+ decoded_payload: str | None = None,
+ preemptive_declaration: str | None = None,
+ ):
+ self._payload: bytes = payload
+
+ self._encoding: str = guessed_encoding
+ self._mean_mess_ratio: float = mean_mess_ratio
+ self._languages: CoherenceMatches = languages
+ self._has_sig_or_bom: bool = has_sig_or_bom
+ self._unicode_ranges: list[str] | None = None
+
+ self._leaves: list[CharsetMatch] = []
+ self._mean_coherence_ratio: float = 0.0
+
+ self._output_payload: bytes | None = None
+ self._output_encoding: str | None = None
+
+ self._string: str | None = decoded_payload
+
+ self._preemptive_declaration: str | None = preemptive_declaration
+
+ def __eq__(self, other: object) -> bool:
+ if not isinstance(other, CharsetMatch):
+ if isinstance(other, str):
+ return iana_name(other) == self.encoding
+ return False
+ return self.encoding == other.encoding and self.fingerprint == other.fingerprint
+
+ def __lt__(self, other: object) -> bool:
+ """
+ Implemented so that sorted() works on CharsetMatch items.
+ """
+ if not isinstance(other, CharsetMatch):
+ raise ValueError
+
+ chaos_difference: float = abs(self.chaos - other.chaos)
+ coherence_difference: float = abs(self.coherence - other.coherence)
+
+ # Below 1% difference --> Use Coherence
+ if chaos_difference < 0.01 and coherence_difference > 0.02:
+ return self.coherence > other.coherence
+ elif chaos_difference < 0.01 and coherence_difference <= 0.02:
+ # When the decision is difficult, prefer the result that decoded as many multi-byte characters as possible.
+ # For very large payloads, compare chaos instead to preserve RAM.
+ if len(self._payload) >= TOO_BIG_SEQUENCE:
+ return self.chaos < other.chaos
+ return self.multi_byte_usage > other.multi_byte_usage
+
+ return self.chaos < other.chaos
+
+ @property
+ def multi_byte_usage(self) -> float:
+ return 1.0 - (len(str(self)) / len(self.raw))
+
+ def __str__(self) -> str:
+ # Lazy Str Loading
+ if self._string is None:
+ self._string = str(self._payload, self._encoding, "strict")
+ return self._string
+
+ def __repr__(self) -> str:
+ return f""
+
+ def add_submatch(self, other: CharsetMatch) -> None:
+ if not isinstance(other, CharsetMatch) or other == self:
+ raise ValueError(
+ "Unable to add instance <{}> as a submatch of a CharsetMatch".format(
+ other.__class__
+ )
+ )
+
+ other._string = None # Unload RAM usage; dirty trick.
+ self._leaves.append(other)
+
+ @property
+ def encoding(self) -> str:
+ return self._encoding
+
+ @property
+ def encoding_aliases(self) -> list[str]:
+ """
+ Encodings are known by many names; this can help when searching for IBM855 when it is listed as CP855.
+ """
+ also_known_as: list[str] = []
+ for u, p in aliases.items():
+ if self.encoding == u:
+ also_known_as.append(p)
+ elif self.encoding == p:
+ also_known_as.append(u)
+ return also_known_as
+
+ @property
+ def bom(self) -> bool:
+ return self._has_sig_or_bom
+
+ @property
+ def byte_order_mark(self) -> bool:
+ return self._has_sig_or_bom
+
+ @property
+ def languages(self) -> list[str]:
+ """
+ Return the complete list of possible languages found in the decoded sequence.
+ Usually not very useful. The returned list may be empty even if the 'language' property returns something != 'Unknown'.
+ """
+ return [e[0] for e in self._languages]
+
+ @property
+ def language(self) -> str:
+ """
+ Most probable language found in decoded sequence. If none were detected or inferred, the property will return
+ "Unknown".
+ """
+ if not self._languages:
+ # Trying to infer the language based on the given encoding
+ # It's either English or, in certain cases, we should not commit to an answer.
+ if "ascii" in self.could_be_from_charset:
+ return "English"
+
+ # doing it there to avoid circular import
+ from charset_normalizer.cd import encoding_languages, mb_encoding_languages
+
+ languages = (
+ mb_encoding_languages(self.encoding)
+ if is_multi_byte_encoding(self.encoding)
+ else encoding_languages(self.encoding)
+ )
+
+ if len(languages) == 0 or "Latin Based" in languages:
+ return "Unknown"
+
+ return languages[0]
+
+ return self._languages[0][0]
+
+ @property
+ def chaos(self) -> float:
+ return self._mean_mess_ratio
+
+ @property
+ def coherence(self) -> float:
+ if not self._languages:
+ return 0.0
+ return self._languages[0][1]
+
+ @property
+ def percent_chaos(self) -> float:
+ return round(self.chaos * 100, ndigits=3)
+
+ @property
+ def percent_coherence(self) -> float:
+ return round(self.coherence * 100, ndigits=3)
+
+ @property
+ def raw(self) -> bytes:
+ """
+ Original untouched bytes.
+ """
+ return self._payload
+
+ @property
+ def submatch(self) -> list[CharsetMatch]:
+ return self._leaves
+
+ @property
+ def has_submatch(self) -> bool:
+ return len(self._leaves) > 0
+
+ @property
+ def alphabets(self) -> list[str]:
+ if self._unicode_ranges is not None:
+ return self._unicode_ranges
+ # list detected ranges
+ detected_ranges: list[str | None] = [unicode_range(char) for char in str(self)]
+ # filter and sort
+ self._unicode_ranges = sorted(list({r for r in detected_ranges if r}))
+ return self._unicode_ranges
+
+ @property
+ def could_be_from_charset(self) -> list[str]:
+ """
+ The complete list of encodings that output the exact SAME str result and therefore could be the originating
+ encoding.
+ This list includes the encoding available in the 'encoding' property.
+ """
+ return [self._encoding] + [m.encoding for m in self._leaves]
+
+ def output(self, encoding: str = "utf_8") -> bytes:
+ """
+ Method to get the re-encoded bytes payload using the given target encoding. Defaults to UTF-8.
+ Characters that cannot be encoded are replaced by the encoder (errors="replace") rather than raising.
+ """
+ if self._output_encoding is None or self._output_encoding != encoding:
+ self._output_encoding = encoding
+ decoded_string = str(self)
+ if (
+ self._preemptive_declaration is not None
+ and self._preemptive_declaration.lower()
+ not in ["utf-8", "utf8", "utf_8"]
+ ):
+ patched_header = sub(
+ RE_POSSIBLE_ENCODING_INDICATION,
+ lambda m: m.string[m.span()[0] : m.span()[1]].replace(
+ m.groups()[0],
+ iana_name(self._output_encoding).replace("_", "-"), # type: ignore[arg-type]
+ ),
+ decoded_string[:8192],
+ count=1,
+ )
+
+ decoded_string = patched_header + decoded_string[8192:]
+
+ self._output_payload = decoded_string.encode(encoding, "replace")
+
+ return self._output_payload # type: ignore
+
+ @property
+ def fingerprint(self) -> str:
+ """
+ Retrieve the unique SHA256 computed using the transformed (re-encoded) payload. Not the original one.
+ """
+ return sha256(self.output()).hexdigest()
+
+
+class CharsetMatches:
+ """
+ Container holding every CharsetMatch item, ordered by default from the most probable to the least probable.
+ Acts like a list (iterable) but does not implement all related methods.
+ """
+
+ def __init__(self, results: list[CharsetMatch] | None = None):
+ self._results: list[CharsetMatch] = sorted(results) if results else []
+
+ def __iter__(self) -> Iterator[CharsetMatch]:
+ yield from self._results
+
+ def __getitem__(self, item: int | str) -> CharsetMatch:
+ """
+ Retrieve a single item either by its position or encoding name (alias may be used here).
+ Raise KeyError upon invalid index or encoding not present in results.
+ """
+ if isinstance(item, int):
+ return self._results[item]
+ if isinstance(item, str):
+ item = iana_name(item, False)
+ for result in self._results:
+ if item in result.could_be_from_charset:
+ return result
+ raise KeyError
+
+ def __len__(self) -> int:
+ return len(self._results)
+
+ def __bool__(self) -> bool:
+ return len(self._results) > 0
+
+ def append(self, item: CharsetMatch) -> None:
+ """
+ Insert a single match. It will be inserted so as to preserve the sort order.
+ Can be inserted as a submatch.
+ """
+ if not isinstance(item, CharsetMatch):
+ raise ValueError(
+ "Cannot append instance '{}' to CharsetMatches".format(
+ str(item.__class__)
+ )
+ )
+ # We should disable the submatch factoring when the input file is too heavy (conserve RAM usage)
+ if len(item.raw) < TOO_BIG_SEQUENCE:
+ for match in self._results:
+ if match.fingerprint == item.fingerprint and match.chaos == item.chaos:
+ match.add_submatch(item)
+ return
+ self._results.append(item)
+ self._results = sorted(self._results)
+
+ def best(self) -> CharsetMatch | None:
+ """
+ Simply return the first match. Strict equivalent to matches[0].
+ """
+ if not self._results:
+ return None
+ return self._results[0]
+
+ def first(self) -> CharsetMatch | None:
+ """
+ Redundant method that simply calls best(). Kept for backward-compatibility reasons.
+ """
+ return self.best()
+
+
+CoherenceMatch = Tuple[str, float]
+CoherenceMatches = List[CoherenceMatch]
+
+
+class CliDetectionResult:
+ def __init__(
+ self,
+ path: str,
+ encoding: str | None,
+ encoding_aliases: list[str],
+ alternative_encodings: list[str],
+ language: str,
+ alphabets: list[str],
+ has_sig_or_bom: bool,
+ chaos: float,
+ coherence: float,
+ unicode_path: str | None,
+ is_preferred: bool,
+ ):
+ self.path: str = path
+ self.unicode_path: str | None = unicode_path
+ self.encoding: str | None = encoding
+ self.encoding_aliases: list[str] = encoding_aliases
+ self.alternative_encodings: list[str] = alternative_encodings
+ self.language: str = language
+ self.alphabets: list[str] = alphabets
+ self.has_sig_or_bom: bool = has_sig_or_bom
+ self.chaos: float = chaos
+ self.coherence: float = coherence
+ self.is_preferred: bool = is_preferred
+
+ @property
+ def __dict__(self) -> dict[str, Any]: # type: ignore
+ return {
+ "path": self.path,
+ "encoding": self.encoding,
+ "encoding_aliases": self.encoding_aliases,
+ "alternative_encodings": self.alternative_encodings,
+ "language": self.language,
+ "alphabets": self.alphabets,
+ "has_sig_or_bom": self.has_sig_or_bom,
+ "chaos": self.chaos,
+ "coherence": self.coherence,
+ "unicode_path": self.unicode_path,
+ "is_preferred": self.is_preferred,
+ }
+
+ def to_json(self) -> str:
+ return dumps(self.__dict__, ensure_ascii=True, indent=4)
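
`CharsetMatch` and `CharsetMatches` are the result types produced by `from_bytes()`. A minimal sketch of the usual consumption pattern; the guessed encoding mentioned in the comments is illustrative only:

```python
# Minimal sketch of consuming the result models defined above.
from charset_normalizer import from_bytes

matches = from_bytes("Héllo wörld, ceci est un exemple.".encode("cp1252"))  # CharsetMatches

best = matches.best()                  # most probable CharsetMatch, or None
if best is not None:
    print(best.encoding)               # guessed encoding, e.g. "cp1252"
    print(best.could_be_from_charset)  # every encoding yielding the exact same text
    print(str(best))                   # the decoded payload
    print(best.output("utf_8"))        # payload re-encoded to UTF-8 bytes
```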
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/py.typed b/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/py.typed
new file mode 100644
index 0000000..e69de29
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/utils.py b/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/utils.py
new file mode 100644
index 0000000..6bf0384
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/utils.py
@@ -0,0 +1,414 @@
+from __future__ import annotations
+
+import importlib
+import logging
+import unicodedata
+from codecs import IncrementalDecoder
+from encodings.aliases import aliases
+from functools import lru_cache
+from re import findall
+from typing import Generator
+
+from _multibytecodec import ( # type: ignore[import-not-found,import]
+ MultibyteIncrementalDecoder,
+)
+
+from .constant import (
+ ENCODING_MARKS,
+ IANA_SUPPORTED_SIMILAR,
+ RE_POSSIBLE_ENCODING_INDICATION,
+ UNICODE_RANGES_COMBINED,
+ UNICODE_SECONDARY_RANGE_KEYWORD,
+ UTF8_MAXIMAL_ALLOCATION,
+ COMMON_CJK_CHARACTERS,
+)
+
+
+@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION)
+def is_accentuated(character: str) -> bool:
+ try:
+ description: str = unicodedata.name(character)
+ except ValueError: # Defensive: unicode database outdated?
+ return False
+ return (
+ "WITH GRAVE" in description
+ or "WITH ACUTE" in description
+ or "WITH CEDILLA" in description
+ or "WITH DIAERESIS" in description
+ or "WITH CIRCUMFLEX" in description
+ or "WITH TILDE" in description
+ or "WITH MACRON" in description
+ or "WITH RING ABOVE" in description
+ )
+
+
+@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION)
+def remove_accent(character: str) -> str:
+ decomposed: str = unicodedata.decomposition(character)
+ if not decomposed:
+ return character
+
+ codes: list[str] = decomposed.split(" ")
+
+ return chr(int(codes[0], 16))
+
+
+@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION)
+def unicode_range(character: str) -> str | None:
+ """
+ Retrieve the Unicode range official name from a single character.
+ """
+ character_ord: int = ord(character)
+
+ for range_name, ord_range in UNICODE_RANGES_COMBINED.items():
+ if character_ord in ord_range:
+ return range_name
+
+ return None
+
+
+@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION)
+def is_latin(character: str) -> bool:
+ try:
+ description: str = unicodedata.name(character)
+ except ValueError: # Defensive: unicode database outdated?
+ return False
+ return "LATIN" in description
+
+
+@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION)
+def is_punctuation(character: str) -> bool:
+ character_category: str = unicodedata.category(character)
+
+ if "P" in character_category:
+ return True
+
+ character_range: str | None = unicode_range(character)
+
+ if character_range is None:
+ return False
+
+ return "Punctuation" in character_range
+
+
+@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION)
+def is_symbol(character: str) -> bool:
+ character_category: str = unicodedata.category(character)
+
+ if "S" in character_category or "N" in character_category:
+ return True
+
+ character_range: str | None = unicode_range(character)
+
+ if character_range is None:
+ return False
+
+ return "Forms" in character_range and character_category != "Lo"
+
+
+@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION)
+def is_emoticon(character: str) -> bool:
+ character_range: str | None = unicode_range(character)
+
+ if character_range is None:
+ return False
+
+ return "Emoticons" in character_range or "Pictographs" in character_range
+
+
+@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION)
+def is_separator(character: str) -> bool:
+ if character.isspace() or character in {"|", "+", "<", ">"}:
+ return True
+
+ character_category: str = unicodedata.category(character)
+
+ return "Z" in character_category or character_category in {"Po", "Pd", "Pc"}
+
+
+@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION)
+def is_case_variable(character: str) -> bool:
+ return character.islower() != character.isupper()
+
+
+@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION)
+def is_cjk(character: str) -> bool:
+ try:
+ character_name = unicodedata.name(character)
+ except ValueError: # Defensive: unicode database outdated?
+ return False
+
+ return "CJK" in character_name
+
+
+@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION)
+def is_hiragana(character: str) -> bool:
+ try:
+ character_name = unicodedata.name(character)
+ except ValueError: # Defensive: unicode database outdated?
+ return False
+
+ return "HIRAGANA" in character_name
+
+
+@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION)
+def is_katakana(character: str) -> bool:
+ try:
+ character_name = unicodedata.name(character)
+ except ValueError: # Defensive: unicode database outdated?
+ return False
+
+ return "KATAKANA" in character_name
+
+
+@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION)
+def is_hangul(character: str) -> bool:
+ try:
+ character_name = unicodedata.name(character)
+ except ValueError: # Defensive: unicode database outdated?
+ return False
+
+ return "HANGUL" in character_name
+
+
+@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION)
+def is_thai(character: str) -> bool:
+ try:
+ character_name = unicodedata.name(character)
+ except ValueError: # Defensive: unicode database outdated?
+ return False
+
+ return "THAI" in character_name
+
+
+@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION)
+def is_arabic(character: str) -> bool:
+ try:
+ character_name = unicodedata.name(character)
+ except ValueError: # Defensive: unicode database outdated?
+ return False
+
+ return "ARABIC" in character_name
+
+
+@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION)
+def is_arabic_isolated_form(character: str) -> bool:
+ try:
+ character_name = unicodedata.name(character)
+ except ValueError: # Defensive: unicode database outdated?
+ return False
+
+ return "ARABIC" in character_name and "ISOLATED FORM" in character_name
+
+
+@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION)
+def is_cjk_uncommon(character: str) -> bool:
+ return character not in COMMON_CJK_CHARACTERS
+
+
+@lru_cache(maxsize=len(UNICODE_RANGES_COMBINED))
+def is_unicode_range_secondary(range_name: str) -> bool:
+ return any(keyword in range_name for keyword in UNICODE_SECONDARY_RANGE_KEYWORD)
+
+
+@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION)
+def is_unprintable(character: str) -> bool:
+ return (
+ character.isspace() is False # includes \n \t \r \v
+ and character.isprintable() is False
+ and character != "\x1a" # Why? Its the ASCII substitute character.
+ and character != "\ufeff" # bug discovered in Python,
+ # Zero Width No-Break Space located in Arabic Presentation Forms-B, Unicode 1.1 not acknowledged as space.
+ )
+
+
+def any_specified_encoding(sequence: bytes, search_zone: int = 8192) -> str | None:
+ """
+ Extract, using an ASCII-only decoder, any declared encoding in the first n bytes.
+ """
+ if not isinstance(sequence, bytes):
+ raise TypeError
+
+ seq_len: int = len(sequence)
+
+ results: list[str] = findall(
+ RE_POSSIBLE_ENCODING_INDICATION,
+ sequence[: min(seq_len, search_zone)].decode("ascii", errors="ignore"),
+ )
+
+ if len(results) == 0:
+ return None
+
+ for specified_encoding in results:
+ specified_encoding = specified_encoding.lower().replace("-", "_")
+
+ encoding_alias: str
+ encoding_iana: str
+
+ for encoding_alias, encoding_iana in aliases.items():
+ if encoding_alias == specified_encoding:
+ return encoding_iana
+ if encoding_iana == specified_encoding:
+ return encoding_iana
+
+ return None
+
+
+@lru_cache(maxsize=128)
+def is_multi_byte_encoding(name: str) -> bool:
+ """
+ Verify whether a specific encoding is a multi-byte one based on its IANA name.
+ """
+ return name in {
+ "utf_8",
+ "utf_8_sig",
+ "utf_16",
+ "utf_16_be",
+ "utf_16_le",
+ "utf_32",
+ "utf_32_le",
+ "utf_32_be",
+ "utf_7",
+ } or issubclass(
+ importlib.import_module(f"encodings.{name}").IncrementalDecoder,
+ MultibyteIncrementalDecoder,
+ )
+
+
+def identify_sig_or_bom(sequence: bytes) -> tuple[str | None, bytes]:
+ """
+ Identify and extract SIG/BOM in given sequence.
+ """
+
+ for iana_encoding in ENCODING_MARKS:
+ marks: bytes | list[bytes] = ENCODING_MARKS[iana_encoding]
+
+ if isinstance(marks, bytes):
+ marks = [marks]
+
+ for mark in marks:
+ if sequence.startswith(mark):
+ return iana_encoding, mark
+
+ return None, b""
+
+
+def should_strip_sig_or_bom(iana_encoding: str) -> bool:
+ return iana_encoding not in {"utf_16", "utf_32"}
+
+
+def iana_name(cp_name: str, strict: bool = True) -> str:
+ """Returns the Python normalized encoding name (Not the IANA official name)."""
+ cp_name = cp_name.lower().replace("-", "_")
+
+ encoding_alias: str
+ encoding_iana: str
+
+ for encoding_alias, encoding_iana in aliases.items():
+ if cp_name in [encoding_alias, encoding_iana]:
+ return encoding_iana
+
+ if strict:
+ raise ValueError(f"Unable to retrieve IANA for '{cp_name}'")
+
+ return cp_name
+
+
+def cp_similarity(iana_name_a: str, iana_name_b: str) -> float:
+ if is_multi_byte_encoding(iana_name_a) or is_multi_byte_encoding(iana_name_b):
+ return 0.0
+
+ decoder_a = importlib.import_module(f"encodings.{iana_name_a}").IncrementalDecoder
+ decoder_b = importlib.import_module(f"encodings.{iana_name_b}").IncrementalDecoder
+
+ id_a: IncrementalDecoder = decoder_a(errors="ignore")
+ id_b: IncrementalDecoder = decoder_b(errors="ignore")
+
+ character_match_count: int = 0
+
+ for i in range(255):
+ to_be_decoded: bytes = bytes([i])
+ if id_a.decode(to_be_decoded) == id_b.decode(to_be_decoded):
+ character_match_count += 1
+
+ return character_match_count / 254
+
+
+def is_cp_similar(iana_name_a: str, iana_name_b: str) -> bool:
+ """
+ Determine whether two code pages are at least 80% similar. The IANA_SUPPORTED_SIMILAR dict was generated using
+ the cp_similarity function.
+ """
+ return (
+ iana_name_a in IANA_SUPPORTED_SIMILAR
+ and iana_name_b in IANA_SUPPORTED_SIMILAR[iana_name_a]
+ )
+
+
+def set_logging_handler(
+ name: str = "charset_normalizer",
+ level: int = logging.INFO,
+ format_string: str = "%(asctime)s | %(levelname)s | %(message)s",
+) -> None:
+ logger = logging.getLogger(name)
+ logger.setLevel(level)
+
+ handler = logging.StreamHandler()
+ handler.setFormatter(logging.Formatter(format_string))
+ logger.addHandler(handler)
+
+
+def cut_sequence_chunks(
+ sequences: bytes,
+ encoding_iana: str,
+ offsets: range,
+ chunk_size: int,
+ bom_or_sig_available: bool,
+ strip_sig_or_bom: bool,
+ sig_payload: bytes,
+ is_multi_byte_decoder: bool,
+ decoded_payload: str | None = None,
+) -> Generator[str, None, None]:
+ if decoded_payload and is_multi_byte_decoder is False:
+ for i in offsets:
+ chunk = decoded_payload[i : i + chunk_size]
+ if not chunk:
+ break
+ yield chunk
+ else:
+ for i in offsets:
+ chunk_end = i + chunk_size
+ if chunk_end > len(sequences) + 8:
+ continue
+
+ cut_sequence = sequences[i : i + chunk_size]
+
+ if bom_or_sig_available and strip_sig_or_bom is False:
+ cut_sequence = sig_payload + cut_sequence
+
+ chunk = cut_sequence.decode(
+ encoding_iana,
+ errors="ignore" if is_multi_byte_decoder else "strict",
+ )
+
+ # multi-byte bad cutting detector and adjustment
+ # not the cleanest way to perform that fix but clever enough for now.
+ if is_multi_byte_decoder and i > 0:
+ chunk_partial_size_chk: int = min(chunk_size, 16)
+
+ if (
+ decoded_payload
+ and chunk[:chunk_partial_size_chk] not in decoded_payload
+ ):
+ for j in range(i, i - 4, -1):
+ cut_sequence = sequences[j:chunk_end]
+
+ if bom_or_sig_available and strip_sig_or_bom is False:
+ cut_sequence = sig_payload + cut_sequence
+
+ chunk = cut_sequence.decode(encoding_iana, errors="ignore")
+
+ if chunk[:chunk_partial_size_chk] in decoded_payload:
+ break
+
+ yield chunk
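
Most helpers above are cached per-character predicates plus a few encoding-name utilities. A short sketch exercising some of them; the values noted in the comments are expectations based on the code above, not verified output:

```python
# Minimal sketch exercising a few of the helpers defined above.
from charset_normalizer.utils import (
    iana_name,
    identify_sig_or_bom,
    is_accentuated,
    unicode_range,
)

print(iana_name("ISO-8859-1"))                    # normalized Python codec name, e.g. "latin_1"
print(unicode_range("é"))                         # e.g. "Latin-1 Supplement"
print(is_accentuated("é"), is_accentuated("e"))   # True False

# BOM/SIG detection returns the matching encoding and the mark itself.
print(identify_sig_or_bom(b"\xef\xbb\xbfhello"))  # ("utf_8", b"\xef\xbb\xbf")
```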
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/version.py b/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/version.py
new file mode 100644
index 0000000..71350e5
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/charset_normalizer/version.py
@@ -0,0 +1,8 @@
+"""
+Expose version
+"""
+
+from __future__ import annotations
+
+__version__ = "3.4.3"
+VERSION = __version__.split(".")
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/click-8.3.0.dist-info/INSTALLER b/stripe_config/.venv/lib/python3.12/site-packages/click-8.3.0.dist-info/INSTALLER
new file mode 100644
index 0000000..a1b589e
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/click-8.3.0.dist-info/INSTALLER
@@ -0,0 +1 @@
+pip
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/click-8.3.0.dist-info/METADATA b/stripe_config/.venv/lib/python3.12/site-packages/click-8.3.0.dist-info/METADATA
new file mode 100644
index 0000000..534eb57
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/click-8.3.0.dist-info/METADATA
@@ -0,0 +1,84 @@
+Metadata-Version: 2.4
+Name: click
+Version: 8.3.0
+Summary: Composable command line interface toolkit
+Maintainer-email: Pallets
+Requires-Python: >=3.10
+Description-Content-Type: text/markdown
+License-Expression: BSD-3-Clause
+Classifier: Development Status :: 5 - Production/Stable
+Classifier: Intended Audience :: Developers
+Classifier: Operating System :: OS Independent
+Classifier: Programming Language :: Python
+Classifier: Typing :: Typed
+License-File: LICENSE.txt
+Requires-Dist: colorama; platform_system == 'Windows'
+Project-URL: Changes, https://click.palletsprojects.com/page/changes/
+Project-URL: Chat, https://discord.gg/pallets
+Project-URL: Documentation, https://click.palletsprojects.com/
+Project-URL: Donate, https://palletsprojects.com/donate
+Project-URL: Source, https://github.com/pallets/click/
+
+
+
+# Click
+
+Click is a Python package for creating beautiful command line interfaces
+in a composable way with as little code as necessary. It's the "Command
+Line Interface Creation Kit". It's highly configurable but comes with
+sensible defaults out of the box.
+
+It aims to make the process of writing command line tools quick and fun
+while also preventing any frustration caused by the inability to
+implement an intended CLI API.
+
+Click in three points:
+
+- Arbitrary nesting of commands
+- Automatic help page generation
+- Supports lazy loading of subcommands at runtime
+
+
+## A Simple Example
+
+```python
+import click
+
+@click.command()
+@click.option("--count", default=1, help="Number of greetings.")
+@click.option("--name", prompt="Your name", help="The person to greet.")
+def hello(count, name):
+ """Simple program that greets NAME for a total of COUNT times."""
+ for _ in range(count):
+ click.echo(f"Hello, {name}!")
+
+if __name__ == '__main__':
+ hello()
+```
+
+```
+$ python hello.py --count=3
+Your name: Click
+Hello, Click!
+Hello, Click!
+Hello, Click!
+```
+
+
+## Donate
+
+The Pallets organization develops and supports Click and other popular
+packages. In order to grow the community of contributors and users, and
+allow the maintainers to devote more time to the projects, [please
+donate today][].
+
+[please donate today]: https://palletsprojects.com/donate
+
+## Contributing
+
+See our [detailed contributing documentation][contrib] for many ways to
+contribute, including reporting issues, requesting features, asking or answering
+questions, and making PRs.
+
+[contrib]: https://palletsprojects.com/contributing/
+
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/click-8.3.0.dist-info/RECORD b/stripe_config/.venv/lib/python3.12/site-packages/click-8.3.0.dist-info/RECORD
new file mode 100644
index 0000000..dc3c2be
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/click-8.3.0.dist-info/RECORD
@@ -0,0 +1,40 @@
+click-8.3.0.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4
+click-8.3.0.dist-info/METADATA,sha256=P6vpEHZ_MLBt4SO2eB-QaadcOdiznkzaZtJImRo7_V4,2621
+click-8.3.0.dist-info/RECORD,,
+click-8.3.0.dist-info/WHEEL,sha256=G2gURzTEtmeR8nrdXUJfNiB3VYVxigPQ-bEQujpNiNs,82
+click-8.3.0.dist-info/licenses/LICENSE.txt,sha256=morRBqOU6FO_4h9C9OctWSgZoigF2ZG18ydQKSkrZY0,1475
+click/__init__.py,sha256=6YyS1aeyknZ0LYweWozNZy0A9nZ_11wmYIhv3cbQrYo,4473
+click/__pycache__/__init__.cpython-312.pyc,,
+click/__pycache__/_compat.cpython-312.pyc,,
+click/__pycache__/_termui_impl.cpython-312.pyc,,
+click/__pycache__/_textwrap.cpython-312.pyc,,
+click/__pycache__/_utils.cpython-312.pyc,,
+click/__pycache__/_winconsole.cpython-312.pyc,,
+click/__pycache__/core.cpython-312.pyc,,
+click/__pycache__/decorators.cpython-312.pyc,,
+click/__pycache__/exceptions.cpython-312.pyc,,
+click/__pycache__/formatting.cpython-312.pyc,,
+click/__pycache__/globals.cpython-312.pyc,,
+click/__pycache__/parser.cpython-312.pyc,,
+click/__pycache__/shell_completion.cpython-312.pyc,,
+click/__pycache__/termui.cpython-312.pyc,,
+click/__pycache__/testing.cpython-312.pyc,,
+click/__pycache__/types.cpython-312.pyc,,
+click/__pycache__/utils.cpython-312.pyc,,
+click/_compat.py,sha256=v3xBZkFbvA1BXPRkFfBJc6-pIwPI7345m-kQEnpVAs4,18693
+click/_termui_impl.py,sha256=ktpAHyJtNkhyR-x64CQFD6xJQI11fTA3qg2AV3iCToU,26799
+click/_textwrap.py,sha256=BOae0RQ6vg3FkNgSJyOoGzG1meGMxJ_ukWVZKx_v-0o,1400
+click/_utils.py,sha256=kZwtTf5gMuCilJJceS2iTCvRvCY-0aN5rJq8gKw7p8g,943
+click/_winconsole.py,sha256=_vxUuUaxwBhoR0vUWCNuHY8VUefiMdCIyU2SXPqoF-A,8465
+click/core.py,sha256=1A5T8UoAXklIGPTJ83_DJbVi35ehtJS2FTkP_wQ7es0,128855
+click/decorators.py,sha256=5P7abhJtAQYp_KHgjUvhMv464ERwOzrv2enNknlwHyQ,18461
+click/exceptions.py,sha256=8utf8w6V5hJXMnO_ic1FNrtbwuEn1NUu1aDwV8UqnG4,9954
+click/formatting.py,sha256=RVfwwr0rwWNpgGr8NaHodPzkIr7_tUyVh_nDdanLMNc,9730
+click/globals.py,sha256=gM-Nh6A4M0HB_SgkaF5M4ncGGMDHc_flHXu9_oh4GEU,1923
+click/parser.py,sha256=Q31pH0FlQZEq-UXE_ABRzlygEfvxPTuZbWNh4xfXmzw,19010
+click/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
+click/shell_completion.py,sha256=Cc4GQUFuWpfQBa9sF5qXeeYI7n3tI_1k6ZdSn4BZbT0,20994
+click/termui.py,sha256=vAYrKC2a7f_NfEIhAThEVYfa__ib5XQbTSCGtJlABRA,30847
+click/testing.py,sha256=EERbzcl1br0mW0qBS9EqkknfNfXB9WQEW0ELIpkvuSs,19102
+click/types.py,sha256=ek54BNSFwPKsqtfT7jsqcc4WHui8AIFVMKM4oVZIXhc,39927
+click/utils.py,sha256=gCUoewdAhA-QLBUUHxrLh4uj6m7T1WjZZMNPvR0I7YA,20257
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/click-8.3.0.dist-info/WHEEL b/stripe_config/.venv/lib/python3.12/site-packages/click-8.3.0.dist-info/WHEEL
new file mode 100644
index 0000000..d8b9936
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/click-8.3.0.dist-info/WHEEL
@@ -0,0 +1,4 @@
+Wheel-Version: 1.0
+Generator: flit 3.12.0
+Root-Is-Purelib: true
+Tag: py3-none-any
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/click-8.3.0.dist-info/licenses/LICENSE.txt b/stripe_config/.venv/lib/python3.12/site-packages/click-8.3.0.dist-info/licenses/LICENSE.txt
new file mode 100644
index 0000000..d12a849
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/click-8.3.0.dist-info/licenses/LICENSE.txt
@@ -0,0 +1,28 @@
+Copyright 2014 Pallets
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are
+met:
+
+1. Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+
+2. Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in the
+ documentation and/or other materials provided with the distribution.
+
+3. Neither the name of the copyright holder nor the names of its
+ contributors may be used to endorse or promote products derived from
+ this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
+PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED
+TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
+LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
+NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/click/__init__.py b/stripe_config/.venv/lib/python3.12/site-packages/click/__init__.py
new file mode 100644
index 0000000..1aa547c
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/click/__init__.py
@@ -0,0 +1,123 @@
+"""
+Click is a simple Python module inspired by the stdlib optparse to make
+writing command line scripts fun. Unlike other modules, it's based
+around a simple API that does not come with too much magic and is
+composable.
+"""
+
+from __future__ import annotations
+
+from .core import Argument as Argument
+from .core import Command as Command
+from .core import CommandCollection as CommandCollection
+from .core import Context as Context
+from .core import Group as Group
+from .core import Option as Option
+from .core import Parameter as Parameter
+from .decorators import argument as argument
+from .decorators import command as command
+from .decorators import confirmation_option as confirmation_option
+from .decorators import group as group
+from .decorators import help_option as help_option
+from .decorators import make_pass_decorator as make_pass_decorator
+from .decorators import option as option
+from .decorators import pass_context as pass_context
+from .decorators import pass_obj as pass_obj
+from .decorators import password_option as password_option
+from .decorators import version_option as version_option
+from .exceptions import Abort as Abort
+from .exceptions import BadArgumentUsage as BadArgumentUsage
+from .exceptions import BadOptionUsage as BadOptionUsage
+from .exceptions import BadParameter as BadParameter
+from .exceptions import ClickException as ClickException
+from .exceptions import FileError as FileError
+from .exceptions import MissingParameter as MissingParameter
+from .exceptions import NoSuchOption as NoSuchOption
+from .exceptions import UsageError as UsageError
+from .formatting import HelpFormatter as HelpFormatter
+from .formatting import wrap_text as wrap_text
+from .globals import get_current_context as get_current_context
+from .termui import clear as clear
+from .termui import confirm as confirm
+from .termui import echo_via_pager as echo_via_pager
+from .termui import edit as edit
+from .termui import getchar as getchar
+from .termui import launch as launch
+from .termui import pause as pause
+from .termui import progressbar as progressbar
+from .termui import prompt as prompt
+from .termui import secho as secho
+from .termui import style as style
+from .termui import unstyle as unstyle
+from .types import BOOL as BOOL
+from .types import Choice as Choice
+from .types import DateTime as DateTime
+from .types import File as File
+from .types import FLOAT as FLOAT
+from .types import FloatRange as FloatRange
+from .types import INT as INT
+from .types import IntRange as IntRange
+from .types import ParamType as ParamType
+from .types import Path as Path
+from .types import STRING as STRING
+from .types import Tuple as Tuple
+from .types import UNPROCESSED as UNPROCESSED
+from .types import UUID as UUID
+from .utils import echo as echo
+from .utils import format_filename as format_filename
+from .utils import get_app_dir as get_app_dir
+from .utils import get_binary_stream as get_binary_stream
+from .utils import get_text_stream as get_text_stream
+from .utils import open_file as open_file
+
+
+def __getattr__(name: str) -> object:
+ import warnings
+
+ if name == "BaseCommand":
+ from .core import _BaseCommand
+
+ warnings.warn(
+ "'BaseCommand' is deprecated and will be removed in Click 9.0. Use"
+ " 'Command' instead.",
+ DeprecationWarning,
+ stacklevel=2,
+ )
+ return _BaseCommand
+
+ if name == "MultiCommand":
+ from .core import _MultiCommand
+
+ warnings.warn(
+ "'MultiCommand' is deprecated and will be removed in Click 9.0. Use"
+ " 'Group' instead.",
+ DeprecationWarning,
+ stacklevel=2,
+ )
+ return _MultiCommand
+
+ if name == "OptionParser":
+ from .parser import _OptionParser
+
+ warnings.warn(
+ "'OptionParser' is deprecated and will be removed in Click 9.0. The"
+ " old parser is available in 'optparse'.",
+ DeprecationWarning,
+ stacklevel=2,
+ )
+ return _OptionParser
+
+ if name == "__version__":
+ import importlib.metadata
+ import warnings
+
+ warnings.warn(
+ "The '__version__' attribute is deprecated and will be removed in"
+ " Click 9.1. Use feature detection or"
+ " 'importlib.metadata.version(\"click\")' instead.",
+ DeprecationWarning,
+ stacklevel=2,
+ )
+ return importlib.metadata.version("click")
+
+ raise AttributeError(name)
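The module-level `__getattr__` above lazily resolves deprecated names (`BaseCommand`, `MultiCommand`, `OptionParser`, `__version__`) and emits a `DeprecationWarning` on access. A minimal sketch of observing that behaviour from user code, assuming this click 8.3 build is importable from the active environment:

```python
import warnings

import click

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    legacy = click.BaseCommand      # routed through __getattr__ to core._BaseCommand
    version = click.__version__     # resolved via importlib.metadata.version("click")

print(legacy, version)
print([str(w.message) for w in caught])  # the two deprecation messages
```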
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/click/__pycache__/__init__.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/click/__pycache__/__init__.cpython-312.pyc
new file mode 100644
index 0000000..4d4b75b
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/click/__pycache__/__init__.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/click/__pycache__/_compat.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/click/__pycache__/_compat.cpython-312.pyc
new file mode 100644
index 0000000..f3e29c9
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/click/__pycache__/_compat.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/click/__pycache__/_termui_impl.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/click/__pycache__/_termui_impl.cpython-312.pyc
new file mode 100644
index 0000000..5e93a0b
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/click/__pycache__/_termui_impl.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/click/__pycache__/_textwrap.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/click/__pycache__/_textwrap.cpython-312.pyc
new file mode 100644
index 0000000..eda4a4a
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/click/__pycache__/_textwrap.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/click/__pycache__/_utils.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/click/__pycache__/_utils.cpython-312.pyc
new file mode 100644
index 0000000..383a479
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/click/__pycache__/_utils.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/click/__pycache__/_winconsole.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/click/__pycache__/_winconsole.cpython-312.pyc
new file mode 100644
index 0000000..bc9c929
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/click/__pycache__/_winconsole.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/click/__pycache__/core.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/click/__pycache__/core.cpython-312.pyc
new file mode 100644
index 0000000..8392008
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/click/__pycache__/core.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/click/__pycache__/decorators.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/click/__pycache__/decorators.cpython-312.pyc
new file mode 100644
index 0000000..a15d47d
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/click/__pycache__/decorators.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/click/__pycache__/exceptions.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/click/__pycache__/exceptions.cpython-312.pyc
new file mode 100644
index 0000000..5434c9c
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/click/__pycache__/exceptions.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/click/__pycache__/formatting.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/click/__pycache__/formatting.cpython-312.pyc
new file mode 100644
index 0000000..74105d7
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/click/__pycache__/formatting.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/click/__pycache__/globals.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/click/__pycache__/globals.cpython-312.pyc
new file mode 100644
index 0000000..944931a
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/click/__pycache__/globals.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/click/__pycache__/parser.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/click/__pycache__/parser.cpython-312.pyc
new file mode 100644
index 0000000..687663d
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/click/__pycache__/parser.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/click/__pycache__/shell_completion.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/click/__pycache__/shell_completion.cpython-312.pyc
new file mode 100644
index 0000000..7bfdbab
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/click/__pycache__/shell_completion.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/click/__pycache__/termui.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/click/__pycache__/termui.cpython-312.pyc
new file mode 100644
index 0000000..1e91cba
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/click/__pycache__/termui.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/click/__pycache__/testing.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/click/__pycache__/testing.cpython-312.pyc
new file mode 100644
index 0000000..8fb14e3
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/click/__pycache__/testing.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/click/__pycache__/types.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/click/__pycache__/types.cpython-312.pyc
new file mode 100644
index 0000000..053d0e6
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/click/__pycache__/types.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/click/__pycache__/utils.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/click/__pycache__/utils.cpython-312.pyc
new file mode 100644
index 0000000..23322d5
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/click/__pycache__/utils.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/click/_compat.py b/stripe_config/.venv/lib/python3.12/site-packages/click/_compat.py
new file mode 100644
index 0000000..f2726b9
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/click/_compat.py
@@ -0,0 +1,622 @@
+from __future__ import annotations
+
+import codecs
+import collections.abc as cabc
+import io
+import os
+import re
+import sys
+import typing as t
+from types import TracebackType
+from weakref import WeakKeyDictionary
+
+CYGWIN = sys.platform.startswith("cygwin")
+WIN = sys.platform.startswith("win")
+auto_wrap_for_ansi: t.Callable[[t.TextIO], t.TextIO] | None = None
+_ansi_re = re.compile(r"\033\[[;?0-9]*[a-zA-Z]")
+
+
+def _make_text_stream(
+ stream: t.BinaryIO,
+ encoding: str | None,
+ errors: str | None,
+ force_readable: bool = False,
+ force_writable: bool = False,
+) -> t.TextIO:
+ if encoding is None:
+ encoding = get_best_encoding(stream)
+ if errors is None:
+ errors = "replace"
+ return _NonClosingTextIOWrapper(
+ stream,
+ encoding,
+ errors,
+ line_buffering=True,
+ force_readable=force_readable,
+ force_writable=force_writable,
+ )
+
+
+def is_ascii_encoding(encoding: str) -> bool:
+ """Checks if a given encoding is ascii."""
+ try:
+ return codecs.lookup(encoding).name == "ascii"
+ except LookupError:
+ return False
+
+
+def get_best_encoding(stream: t.IO[t.Any]) -> str:
+    """Return the stream's declared encoding, falling back to UTF-8 when the
+    stream reports ASCII and to the interpreter default when it reports none.
+    """
+ rv = getattr(stream, "encoding", None) or sys.getdefaultencoding()
+ if is_ascii_encoding(rv):
+ return "utf-8"
+ return rv
+
+
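As a quick illustration of the fallback above (a sketch that imports the private `click._compat` module purely for demonstration): streams that report ASCII are upgraded to UTF-8, and streams with no encoding at all fall back to the interpreter default.

```python
import io
import sys

from click._compat import get_best_encoding

# An ASCII-reporting stream is upgraded to UTF-8.
ascii_stream = io.TextIOWrapper(io.BytesIO(), encoding="ascii")
assert get_best_encoding(ascii_stream) == "utf-8"

# A UTF-8 stream keeps its declared encoding.
utf8_stream = io.TextIOWrapper(io.BytesIO(), encoding="utf-8")
assert get_best_encoding(utf8_stream) == "utf-8"

# No `encoding` attribute at all: fall back to the interpreter default.
print(get_best_encoding(io.BytesIO()), sys.getdefaultencoding())
```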
+class _NonClosingTextIOWrapper(io.TextIOWrapper):
+ def __init__(
+ self,
+ stream: t.BinaryIO,
+ encoding: str | None,
+ errors: str | None,
+ force_readable: bool = False,
+ force_writable: bool = False,
+ **extra: t.Any,
+ ) -> None:
+ self._stream = stream = t.cast(
+ t.BinaryIO, _FixupStream(stream, force_readable, force_writable)
+ )
+ super().__init__(stream, encoding, errors, **extra)
+
+ def __del__(self) -> None:
+ try:
+ self.detach()
+ except Exception:
+ pass
+
+ def isatty(self) -> bool:
+ # https://bitbucket.org/pypy/pypy/issue/1803
+ return self._stream.isatty()
+
+
+class _FixupStream:
+ """The new io interface needs more from streams than streams
+ traditionally implement. As such, this fix-up code is necessary in
+ some circumstances.
+
+    The forcing of the readable and writable flags is there because some
+    tools put badly patched objects on sys (one such offender is certain
+    versions of Jupyter notebook).
+ """
+
+ def __init__(
+ self,
+ stream: t.BinaryIO,
+ force_readable: bool = False,
+ force_writable: bool = False,
+ ):
+ self._stream = stream
+ self._force_readable = force_readable
+ self._force_writable = force_writable
+
+ def __getattr__(self, name: str) -> t.Any:
+ return getattr(self._stream, name)
+
+ def read1(self, size: int) -> bytes:
+ f = getattr(self._stream, "read1", None)
+
+ if f is not None:
+ return t.cast(bytes, f(size))
+
+ return self._stream.read(size)
+
+ def readable(self) -> bool:
+ if self._force_readable:
+ return True
+ x = getattr(self._stream, "readable", None)
+ if x is not None:
+ return t.cast(bool, x())
+ try:
+ self._stream.read(0)
+ except Exception:
+ return False
+ return True
+
+ def writable(self) -> bool:
+ if self._force_writable:
+ return True
+ x = getattr(self._stream, "writable", None)
+ if x is not None:
+ return t.cast(bool, x())
+ try:
+ self._stream.write(b"")
+ except Exception:
+ try:
+ self._stream.write(b"")
+ except Exception:
+ return False
+ return True
+
+ def seekable(self) -> bool:
+ x = getattr(self._stream, "seekable", None)
+ if x is not None:
+ return t.cast(bool, x())
+ try:
+ self._stream.seek(self._stream.tell())
+ except Exception:
+ return False
+ return True
+
+
+def _is_binary_reader(stream: t.IO[t.Any], default: bool = False) -> bool:
+ try:
+ return isinstance(stream.read(0), bytes)
+    except Exception:
+        # This happens in some cases where the stream was already
+        # closed. In this case, we assume the default.
+        return default
+
+
+def _is_binary_writer(stream: t.IO[t.Any], default: bool = False) -> bool:
+ try:
+ stream.write(b"")
+ except Exception:
+ try:
+ stream.write("")
+ return False
+ except Exception:
+ pass
+ return default
+ return True
+
+
+def _find_binary_reader(stream: t.IO[t.Any]) -> t.BinaryIO | None:
+ # We need to figure out if the given stream is already binary.
+ # This can happen because the official docs recommend detaching
+ # the streams to get binary streams. Some code might do this, so
+ # we need to deal with this case explicitly.
+ if _is_binary_reader(stream, False):
+ return t.cast(t.BinaryIO, stream)
+
+ buf = getattr(stream, "buffer", None)
+
+ # Same situation here; this time we assume that the buffer is
+ # actually binary in case it's closed.
+ if buf is not None and _is_binary_reader(buf, True):
+ return t.cast(t.BinaryIO, buf)
+
+ return None
+
+
+def _find_binary_writer(stream: t.IO[t.Any]) -> t.BinaryIO | None:
+ # We need to figure out if the given stream is already binary.
+ # This can happen because the official docs recommend detaching
+ # the streams to get binary streams. Some code might do this, so
+ # we need to deal with this case explicitly.
+ if _is_binary_writer(stream, False):
+ return t.cast(t.BinaryIO, stream)
+
+ buf = getattr(stream, "buffer", None)
+
+ # Same situation here; this time we assume that the buffer is
+ # actually binary in case it's closed.
+ if buf is not None and _is_binary_writer(buf, True):
+ return t.cast(t.BinaryIO, buf)
+
+ return None
+
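These two helpers accept either an already-binary stream or a text stream whose `.buffer` attribute exposes the underlying bytes interface. A small sketch (again importing private helpers only for illustration):

```python
import io

from click._compat import _find_binary_reader, _find_binary_writer

# A plain BytesIO already is a binary reader and writer.
raw = io.BytesIO(b"payload")
assert _find_binary_reader(raw) is raw
assert _find_binary_writer(raw) is raw

# A TextIOWrapper is not binary itself, but its .buffer attribute is.
text = io.TextIOWrapper(io.BytesIO(b"payload"), encoding="utf-8")
assert _find_binary_reader(text) is text.buffer
assert _find_binary_writer(text) is text.buffer
```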
+
+def _stream_is_misconfigured(stream: t.TextIO) -> bool:
+ """A stream is misconfigured if its encoding is ASCII."""
+ # If the stream does not have an encoding set, we assume it's set
+ # to ASCII. This appears to happen in certain unittest
+ # environments. It's not quite clear what the correct behavior is
+ # but this at least will force Click to recover somehow.
+ return is_ascii_encoding(getattr(stream, "encoding", None) or "ascii")
+
+
+def _is_compat_stream_attr(stream: t.TextIO, attr: str, value: str | None) -> bool:
+ """A stream attribute is compatible if it is equal to the
+ desired value or the desired value is unset and the attribute
+ has a value.
+ """
+ stream_value = getattr(stream, attr, None)
+ return stream_value == value or (value is None and stream_value is not None)
+
+
+def _is_compatible_text_stream(
+ stream: t.TextIO, encoding: str | None, errors: str | None
+) -> bool:
+ """Check if a stream's encoding and errors attributes are
+ compatible with the desired values.
+ """
+ return _is_compat_stream_attr(
+ stream, "encoding", encoding
+ ) and _is_compat_stream_attr(stream, "errors", errors)
+
+
+def _force_correct_text_stream(
+ text_stream: t.IO[t.Any],
+ encoding: str | None,
+ errors: str | None,
+ is_binary: t.Callable[[t.IO[t.Any], bool], bool],
+ find_binary: t.Callable[[t.IO[t.Any]], t.BinaryIO | None],
+ force_readable: bool = False,
+ force_writable: bool = False,
+) -> t.TextIO:
+ if is_binary(text_stream, False):
+ binary_reader = t.cast(t.BinaryIO, text_stream)
+ else:
+ text_stream = t.cast(t.TextIO, text_stream)
+ # If the stream looks compatible, and won't default to a
+ # misconfigured ascii encoding, return it as-is.
+ if _is_compatible_text_stream(text_stream, encoding, errors) and not (
+ encoding is None and _stream_is_misconfigured(text_stream)
+ ):
+ return text_stream
+
+ # Otherwise, get the underlying binary reader.
+ possible_binary_reader = find_binary(text_stream)
+
+ # If that's not possible, silently use the original reader
+ # and get mojibake instead of exceptions.
+ if possible_binary_reader is None:
+ return text_stream
+
+ binary_reader = possible_binary_reader
+
+ # Default errors to replace instead of strict in order to get
+ # something that works.
+ if errors is None:
+ errors = "replace"
+
+ # Wrap the binary stream in a text stream with the correct
+ # encoding parameters.
+ return _make_text_stream(
+ binary_reader,
+ encoding,
+ errors,
+ force_readable=force_readable,
+ force_writable=force_writable,
+ )
+
+
+def _force_correct_text_reader(
+ text_reader: t.IO[t.Any],
+ encoding: str | None,
+ errors: str | None,
+ force_readable: bool = False,
+) -> t.TextIO:
+ return _force_correct_text_stream(
+ text_reader,
+ encoding,
+ errors,
+ _is_binary_reader,
+ _find_binary_reader,
+ force_readable=force_readable,
+ )
+
+
+def _force_correct_text_writer(
+ text_writer: t.IO[t.Any],
+ encoding: str | None,
+ errors: str | None,
+ force_writable: bool = False,
+) -> t.TextIO:
+ return _force_correct_text_stream(
+ text_writer,
+ encoding,
+ errors,
+ _is_binary_writer,
+ _find_binary_writer,
+ force_writable=force_writable,
+ )
+
+
+def get_binary_stdin() -> t.BinaryIO:
+ reader = _find_binary_reader(sys.stdin)
+ if reader is None:
+ raise RuntimeError("Was not able to determine binary stream for sys.stdin.")
+ return reader
+
+
+def get_binary_stdout() -> t.BinaryIO:
+ writer = _find_binary_writer(sys.stdout)
+ if writer is None:
+ raise RuntimeError("Was not able to determine binary stream for sys.stdout.")
+ return writer
+
+
+def get_binary_stderr() -> t.BinaryIO:
+ writer = _find_binary_writer(sys.stderr)
+ if writer is None:
+ raise RuntimeError("Was not able to determine binary stream for sys.stderr.")
+ return writer
+
+
+def get_text_stdin(encoding: str | None = None, errors: str | None = None) -> t.TextIO:
+ rv = _get_windows_console_stream(sys.stdin, encoding, errors)
+ if rv is not None:
+ return rv
+ return _force_correct_text_reader(sys.stdin, encoding, errors, force_readable=True)
+
+
+def get_text_stdout(encoding: str | None = None, errors: str | None = None) -> t.TextIO:
+ rv = _get_windows_console_stream(sys.stdout, encoding, errors)
+ if rv is not None:
+ return rv
+ return _force_correct_text_writer(sys.stdout, encoding, errors, force_writable=True)
+
+
+def get_text_stderr(encoding: str | None = None, errors: str | None = None) -> t.TextIO:
+ rv = _get_windows_console_stream(sys.stderr, encoding, errors)
+ if rv is not None:
+ return rv
+ return _force_correct_text_writer(sys.stderr, encoding, errors, force_writable=True)
+
+
+def _wrap_io_open(
+ file: str | os.PathLike[str] | int,
+ mode: str,
+ encoding: str | None,
+ errors: str | None,
+) -> t.IO[t.Any]:
+ """Handles not passing ``encoding`` and ``errors`` in binary mode."""
+ if "b" in mode:
+ return open(file, mode)
+
+ return open(file, mode, encoding=encoding, errors=errors)
+
+
+def open_stream(
+ filename: str | os.PathLike[str],
+ mode: str = "r",
+ encoding: str | None = None,
+ errors: str | None = "strict",
+ atomic: bool = False,
+) -> tuple[t.IO[t.Any], bool]:
+ binary = "b" in mode
+ filename = os.fspath(filename)
+
+ # Standard streams first. These are simple because they ignore the
+ # atomic flag. Use fsdecode to handle Path("-").
+ if os.fsdecode(filename) == "-":
+ if any(m in mode for m in ["w", "a", "x"]):
+ if binary:
+ return get_binary_stdout(), False
+ return get_text_stdout(encoding=encoding, errors=errors), False
+ if binary:
+ return get_binary_stdin(), False
+ return get_text_stdin(encoding=encoding, errors=errors), False
+
+ # Non-atomic writes directly go out through the regular open functions.
+ if not atomic:
+ return _wrap_io_open(filename, mode, encoding, errors), True
+
+ # Some usability stuff for atomic writes
+ if "a" in mode:
+ raise ValueError(
+ "Appending to an existing file is not supported, because that"
+ " would involve an expensive `copy`-operation to a temporary"
+ " file. Open the file in normal `w`-mode and copy explicitly"
+ " if that's what you're after."
+ )
+ if "x" in mode:
+ raise ValueError("Use the `overwrite`-parameter instead.")
+ if "w" not in mode:
+ raise ValueError("Atomic writes only make sense with `w`-mode.")
+
+ # Atomic writes are more complicated. They work by opening a file
+ # as a proxy in the same folder and then using the fdopen
+ # functionality to wrap it in a Python file. Then we wrap it in an
+ # atomic file that moves the file over on close.
+ import errno
+ import random
+
+ try:
+ perm: int | None = os.stat(filename).st_mode
+ except OSError:
+ perm = None
+
+ flags = os.O_RDWR | os.O_CREAT | os.O_EXCL
+
+ if binary:
+ flags |= getattr(os, "O_BINARY", 0)
+
+ while True:
+ tmp_filename = os.path.join(
+ os.path.dirname(filename),
+ f".__atomic-write{random.randrange(1 << 32):08x}",
+ )
+ try:
+ fd = os.open(tmp_filename, flags, 0o666 if perm is None else perm)
+ break
+ except OSError as e:
+ if e.errno == errno.EEXIST or (
+ os.name == "nt"
+ and e.errno == errno.EACCES
+ and os.path.isdir(e.filename)
+ and os.access(e.filename, os.W_OK)
+ ):
+ continue
+ raise
+
+ if perm is not None:
+ os.chmod(tmp_filename, perm) # in case perm includes bits in umask
+
+ f = _wrap_io_open(fd, mode, encoding, errors)
+ af = _AtomicFile(f, tmp_filename, os.path.realpath(filename))
+ return t.cast(t.IO[t.Any], af), True
+
+
+class _AtomicFile:
+ def __init__(self, f: t.IO[t.Any], tmp_filename: str, real_filename: str) -> None:
+ self._f = f
+ self._tmp_filename = tmp_filename
+ self._real_filename = real_filename
+ self.closed = False
+
+ @property
+ def name(self) -> str:
+ return self._real_filename
+
+ def close(self, delete: bool = False) -> None:
+ if self.closed:
+ return
+ self._f.close()
+ os.replace(self._tmp_filename, self._real_filename)
+ self.closed = True
+
+ def __getattr__(self, name: str) -> t.Any:
+ return getattr(self._f, name)
+
+ def __enter__(self) -> _AtomicFile:
+ return self
+
+ def __exit__(
+ self,
+ exc_type: type[BaseException] | None,
+ exc_value: BaseException | None,
+ tb: TracebackType | None,
+ ) -> None:
+ self.close(delete=exc_type is not None)
+
+ def __repr__(self) -> str:
+ return repr(self._f)
+
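`_AtomicFile` writes to a hidden `.__atomic-write*` sibling file and only `os.replace()`s it over the real path on close, so readers never observe a half-written file. A hedged sketch of driving this through the public `click.open_file(..., atomic=True)` entry point (the target path below is created just for the example):

```python
import os
import tempfile

import click

target = os.path.join(tempfile.mkdtemp(), "config.json")

# Data lands in a temporary sibling file and is moved over `target`
# atomically when the stream is closed.
with click.open_file(target, "w", atomic=True) as f:
    f.write('{"retries": 3}\n')

with open(target, encoding="utf-8") as f:
    print(f.read())  # {"retries": 3}
```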
+
+def strip_ansi(value: str) -> str:
+ return _ansi_re.sub("", value)
+
+
+def _is_jupyter_kernel_output(stream: t.IO[t.Any]) -> bool:
+ while isinstance(stream, (_FixupStream, _NonClosingTextIOWrapper)):
+ stream = stream._stream
+
+ return stream.__class__.__module__.startswith("ipykernel.")
+
+
+def should_strip_ansi(
+ stream: t.IO[t.Any] | None = None, color: bool | None = None
+) -> bool:
+ if color is None:
+ if stream is None:
+ stream = sys.stdin
+ return not isatty(stream) and not _is_jupyter_kernel_output(stream)
+ return not color
+
+
+# On Windows, wrap the output streams with colorama to support ANSI
+# color codes.
+# NOTE: double check is needed so mypy does not analyze this on Linux
+if sys.platform.startswith("win") and WIN:
+ from ._winconsole import _get_windows_console_stream
+
+ def _get_argv_encoding() -> str:
+ import locale
+
+ return locale.getpreferredencoding()
+
+ _ansi_stream_wrappers: cabc.MutableMapping[t.TextIO, t.TextIO] = WeakKeyDictionary()
+
+ def auto_wrap_for_ansi(stream: t.TextIO, color: bool | None = None) -> t.TextIO:
+ """Support ANSI color and style codes on Windows by wrapping a
+ stream with colorama.
+ """
+ try:
+ cached = _ansi_stream_wrappers.get(stream)
+ except Exception:
+ cached = None
+
+ if cached is not None:
+ return cached
+
+ import colorama
+
+ strip = should_strip_ansi(stream, color)
+ ansi_wrapper = colorama.AnsiToWin32(stream, strip=strip)
+ rv = t.cast(t.TextIO, ansi_wrapper.stream)
+ _write = rv.write
+
+ def _safe_write(s: str) -> int:
+ try:
+ return _write(s)
+ except BaseException:
+ ansi_wrapper.reset_all()
+ raise
+
+ rv.write = _safe_write # type: ignore[method-assign]
+
+ try:
+ _ansi_stream_wrappers[stream] = rv
+ except Exception:
+ pass
+
+ return rv
+
+else:
+
+ def _get_argv_encoding() -> str:
+ return getattr(sys.stdin, "encoding", None) or sys.getfilesystemencoding()
+
+ def _get_windows_console_stream(
+ f: t.TextIO, encoding: str | None, errors: str | None
+ ) -> t.TextIO | None:
+ return None
+
+
+def term_len(x: str) -> int:
+ return len(strip_ansi(x))
+
+
+def isatty(stream: t.IO[t.Any]) -> bool:
+ try:
+ return stream.isatty()
+ except Exception:
+ return False
+
+
+def _make_cached_stream_func(
+ src_func: t.Callable[[], t.TextIO | None],
+ wrapper_func: t.Callable[[], t.TextIO],
+) -> t.Callable[[], t.TextIO | None]:
+ cache: cabc.MutableMapping[t.TextIO, t.TextIO] = WeakKeyDictionary()
+
+ def func() -> t.TextIO | None:
+ stream = src_func()
+
+ if stream is None:
+ return None
+
+ try:
+ rv = cache.get(stream)
+ except Exception:
+ rv = None
+ if rv is not None:
+ return rv
+ rv = wrapper_func()
+ try:
+ cache[stream] = rv
+ except Exception:
+ pass
+ return rv
+
+ return func
+
+
+_default_text_stdin = _make_cached_stream_func(lambda: sys.stdin, get_text_stdin)
+_default_text_stdout = _make_cached_stream_func(lambda: sys.stdout, get_text_stdout)
+_default_text_stderr = _make_cached_stream_func(lambda: sys.stderr, get_text_stderr)
+
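The three `_default_text_*` factories above memoize one wrapper per underlying stream in a `WeakKeyDictionary`, so repeated `echo()` calls do not stack new `TextIOWrapper`s around stdout or stderr. A tiny check of that caching behaviour (private helpers, illustration only):

```python
from click._compat import _default_text_stderr, _default_text_stdout

# The same memoized wrapper is returned for the same underlying stream.
assert _default_text_stdout() is _default_text_stdout()
assert _default_text_stderr() is _default_text_stderr()
```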
+
+binary_streams: cabc.Mapping[str, t.Callable[[], t.BinaryIO]] = {
+ "stdin": get_binary_stdin,
+ "stdout": get_binary_stdout,
+ "stderr": get_binary_stderr,
+}
+
+text_streams: cabc.Mapping[str, t.Callable[[str | None, str | None], t.TextIO]] = {
+ "stdin": get_text_stdin,
+ "stdout": get_text_stdout,
+ "stderr": get_text_stderr,
+}
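These two mappings back the public `click.get_binary_stream` and `click.get_text_stream` helpers, which are the supported way to reach the rewrapped standard streams. Usage sketch:

```python
import click

# Bytes go straight to the binary layer, bypassing any text wrapper.
out = click.get_binary_stream("stdout")
out.write("ünïcode written as explicit UTF-8 bytes\n".encode("utf-8"))
out.flush()

# Text goes through a wrapper with a sane (non-ASCII) encoding.
err = click.get_text_stream("stderr", encoding="utf-8")
err.write("diagnostics go to stderr\n")
err.flush()
```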
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/click/_termui_impl.py b/stripe_config/.venv/lib/python3.12/site-packages/click/_termui_impl.py
new file mode 100644
index 0000000..47f87b8
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/click/_termui_impl.py
@@ -0,0 +1,847 @@
+"""
+This module contains implementations for the termui module. To keep the
+import time of Click down, some infrequently used functionality is
+placed in this module and only imported as needed.
+"""
+
+from __future__ import annotations
+
+import collections.abc as cabc
+import contextlib
+import math
+import os
+import shlex
+import sys
+import time
+import typing as t
+from gettext import gettext as _
+from io import StringIO
+from pathlib import Path
+from types import TracebackType
+
+from ._compat import _default_text_stdout
+from ._compat import CYGWIN
+from ._compat import get_best_encoding
+from ._compat import isatty
+from ._compat import open_stream
+from ._compat import strip_ansi
+from ._compat import term_len
+from ._compat import WIN
+from .exceptions import ClickException
+from .utils import echo
+
+V = t.TypeVar("V")
+
+if os.name == "nt":
+ BEFORE_BAR = "\r"
+ AFTER_BAR = "\n"
+else:
+ BEFORE_BAR = "\r\033[?25l"
+ AFTER_BAR = "\033[?25h\n"
+
+
+class ProgressBar(t.Generic[V]):
+ def __init__(
+ self,
+ iterable: cabc.Iterable[V] | None,
+ length: int | None = None,
+ fill_char: str = "#",
+ empty_char: str = " ",
+ bar_template: str = "%(bar)s",
+ info_sep: str = " ",
+ hidden: bool = False,
+ show_eta: bool = True,
+ show_percent: bool | None = None,
+ show_pos: bool = False,
+ item_show_func: t.Callable[[V | None], str | None] | None = None,
+ label: str | None = None,
+ file: t.TextIO | None = None,
+ color: bool | None = None,
+ update_min_steps: int = 1,
+ width: int = 30,
+ ) -> None:
+ self.fill_char = fill_char
+ self.empty_char = empty_char
+ self.bar_template = bar_template
+ self.info_sep = info_sep
+ self.hidden = hidden
+ self.show_eta = show_eta
+ self.show_percent = show_percent
+ self.show_pos = show_pos
+ self.item_show_func = item_show_func
+ self.label: str = label or ""
+
+ if file is None:
+ file = _default_text_stdout()
+
+ # There are no standard streams attached to write to. For example,
+ # pythonw on Windows.
+ if file is None:
+ file = StringIO()
+
+ self.file = file
+ self.color = color
+ self.update_min_steps = update_min_steps
+ self._completed_intervals = 0
+ self.width: int = width
+ self.autowidth: bool = width == 0
+
+ if length is None:
+ from operator import length_hint
+
+ length = length_hint(iterable, -1)
+
+ if length == -1:
+ length = None
+ if iterable is None:
+ if length is None:
+ raise TypeError("iterable or length is required")
+ iterable = t.cast("cabc.Iterable[V]", range(length))
+ self.iter: cabc.Iterable[V] = iter(iterable)
+ self.length = length
+ self.pos: int = 0
+ self.avg: list[float] = []
+ self.last_eta: float
+ self.start: float
+ self.start = self.last_eta = time.time()
+ self.eta_known: bool = False
+ self.finished: bool = False
+ self.max_width: int | None = None
+ self.entered: bool = False
+ self.current_item: V | None = None
+ self._is_atty = isatty(self.file)
+ self._last_line: str | None = None
+
+ def __enter__(self) -> ProgressBar[V]:
+ self.entered = True
+ self.render_progress()
+ return self
+
+ def __exit__(
+ self,
+ exc_type: type[BaseException] | None,
+ exc_value: BaseException | None,
+ tb: TracebackType | None,
+ ) -> None:
+ self.render_finish()
+
+ def __iter__(self) -> cabc.Iterator[V]:
+ if not self.entered:
+ raise RuntimeError("You need to use progress bars in a with block.")
+ self.render_progress()
+ return self.generator()
+
+ def __next__(self) -> V:
+ # Iteration is defined in terms of a generator function,
+ # returned by iter(self); use that to define next(). This works
+ # because `self.iter` is an iterable consumed by that generator,
+ # so it is re-entry safe. Calling `next(self.generator())`
+ # twice works and does "what you want".
+ return next(iter(self))
+
+ def render_finish(self) -> None:
+ if self.hidden or not self._is_atty:
+ return
+ self.file.write(AFTER_BAR)
+ self.file.flush()
+
+ @property
+ def pct(self) -> float:
+ if self.finished:
+ return 1.0
+ return min(self.pos / (float(self.length or 1) or 1), 1.0)
+
+ @property
+ def time_per_iteration(self) -> float:
+ if not self.avg:
+ return 0.0
+ return sum(self.avg) / float(len(self.avg))
+
+ @property
+ def eta(self) -> float:
+ if self.length is not None and not self.finished:
+ return self.time_per_iteration * (self.length - self.pos)
+ return 0.0
+
+ def format_eta(self) -> str:
+ if self.eta_known:
+ t = int(self.eta)
+ seconds = t % 60
+ t //= 60
+ minutes = t % 60
+ t //= 60
+ hours = t % 24
+ t //= 24
+ if t > 0:
+ return f"{t}d {hours:02}:{minutes:02}:{seconds:02}"
+ else:
+ return f"{hours:02}:{minutes:02}:{seconds:02}"
+ return ""
+
+ def format_pos(self) -> str:
+ pos = str(self.pos)
+ if self.length is not None:
+ pos += f"/{self.length}"
+ return pos
+
+ def format_pct(self) -> str:
+ return f"{int(self.pct * 100): 4}%"[1:]
+
+ def format_bar(self) -> str:
+ if self.length is not None:
+ bar_length = int(self.pct * self.width)
+ bar = self.fill_char * bar_length
+ bar += self.empty_char * (self.width - bar_length)
+ elif self.finished:
+ bar = self.fill_char * self.width
+ else:
+ chars = list(self.empty_char * (self.width or 1))
+ if self.time_per_iteration != 0:
+ chars[
+ int(
+ (math.cos(self.pos * self.time_per_iteration) / 2.0 + 0.5)
+ * self.width
+ )
+ ] = self.fill_char
+ bar = "".join(chars)
+ return bar
+
+ def format_progress_line(self) -> str:
+ show_percent = self.show_percent
+
+ info_bits = []
+ if self.length is not None and show_percent is None:
+ show_percent = not self.show_pos
+
+ if self.show_pos:
+ info_bits.append(self.format_pos())
+ if show_percent:
+ info_bits.append(self.format_pct())
+ if self.show_eta and self.eta_known and not self.finished:
+ info_bits.append(self.format_eta())
+ if self.item_show_func is not None:
+ item_info = self.item_show_func(self.current_item)
+ if item_info is not None:
+ info_bits.append(item_info)
+
+ return (
+ self.bar_template
+ % {
+ "label": self.label,
+ "bar": self.format_bar(),
+ "info": self.info_sep.join(info_bits),
+ }
+ ).rstrip()
+
+ def render_progress(self) -> None:
+ if self.hidden:
+ return
+
+ if not self._is_atty:
+ # Only output the label once if the output is not a TTY.
+ if self._last_line != self.label:
+ self._last_line = self.label
+ echo(self.label, file=self.file, color=self.color)
+ return
+
+ buf = []
+ # Update width in case the terminal has been resized
+ if self.autowidth:
+ import shutil
+
+ old_width = self.width
+ self.width = 0
+ clutter_length = term_len(self.format_progress_line())
+ new_width = max(0, shutil.get_terminal_size().columns - clutter_length)
+ if new_width < old_width and self.max_width is not None:
+ buf.append(BEFORE_BAR)
+ buf.append(" " * self.max_width)
+ self.max_width = new_width
+ self.width = new_width
+
+ clear_width = self.width
+ if self.max_width is not None:
+ clear_width = self.max_width
+
+ buf.append(BEFORE_BAR)
+ line = self.format_progress_line()
+ line_len = term_len(line)
+ if self.max_width is None or self.max_width < line_len:
+ self.max_width = line_len
+
+ buf.append(line)
+ buf.append(" " * (clear_width - line_len))
+ line = "".join(buf)
+ # Render the line only if it changed.
+
+ if line != self._last_line:
+ self._last_line = line
+ echo(line, file=self.file, color=self.color, nl=False)
+ self.file.flush()
+
+ def make_step(self, n_steps: int) -> None:
+ self.pos += n_steps
+ if self.length is not None and self.pos >= self.length:
+ self.finished = True
+
+ if (time.time() - self.last_eta) < 1.0:
+ return
+
+ self.last_eta = time.time()
+
+ # self.avg is a rolling list of length <= 7 of steps where steps are
+ # defined as time elapsed divided by the total progress through
+ # self.length.
+ if self.pos:
+ step = (time.time() - self.start) / self.pos
+ else:
+ step = time.time() - self.start
+
+ self.avg = self.avg[-6:] + [step]
+
+ self.eta_known = self.length is not None
+
+ def update(self, n_steps: int, current_item: V | None = None) -> None:
+ """Update the progress bar by advancing a specified number of
+ steps, and optionally set the ``current_item`` for this new
+ position.
+
+ :param n_steps: Number of steps to advance.
+ :param current_item: Optional item to set as ``current_item``
+ for the updated position.
+
+ .. versionchanged:: 8.0
+ Added the ``current_item`` optional parameter.
+
+ .. versionchanged:: 8.0
+ Only render when the number of steps meets the
+ ``update_min_steps`` threshold.
+ """
+ if current_item is not None:
+ self.current_item = current_item
+
+ self._completed_intervals += n_steps
+
+ if self._completed_intervals >= self.update_min_steps:
+ self.make_step(self._completed_intervals)
+ self.render_progress()
+ self._completed_intervals = 0
+
+ def finish(self) -> None:
+ self.eta_known = False
+ self.current_item = None
+ self.finished = True
+
+ def generator(self) -> cabc.Iterator[V]:
+ """Return a generator which yields the items added to the bar
+ during construction, and updates the progress bar *after* the
+ yielded block returns.
+ """
+ # WARNING: the iterator interface for `ProgressBar` relies on
+ # this and only works because this is a simple generator which
+ # doesn't create or manage additional state. If this function
+ # changes, the impact should be evaluated both against
+ # `iter(bar)` and `next(bar)`. `next()` in particular may call
+ # `self.generator()` repeatedly, and this must remain safe in
+ # order for that interface to work.
+ if not self.entered:
+ raise RuntimeError("You need to use progress bars in a with block.")
+
+ if not self._is_atty:
+ yield from self.iter
+ else:
+ for rv in self.iter:
+ self.current_item = rv
+
+ # This allows show_item_func to be updated before the
+ # item is processed. Only trigger at the beginning of
+ # the update interval.
+ if self._completed_intervals == 0:
+ self.render_progress()
+
+ yield rv
+ self.update(1)
+
+ self.finish()
+ self.render_progress()
+
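`ProgressBar` is normally reached through the public `click.progressbar` context manager; iterating inside the `with` block drives `update()` and `render_progress()` automatically, and rendering degrades to a single label line when output is not a TTY. A minimal usage sketch (the sleep stands in for real per-item work):

```python
import time

import click

items = range(50)

with click.progressbar(items, label="Processing", show_pos=True) as bar:
    for item in bar:
        time.sleep(0.01)
```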
+
+def pager(generator: cabc.Iterable[str], color: bool | None = None) -> None:
+ """Decide what method to use for paging through text."""
+ stdout = _default_text_stdout()
+
+ # There are no standard streams attached to write to. For example,
+ # pythonw on Windows.
+ if stdout is None:
+ stdout = StringIO()
+
+ if not isatty(sys.stdin) or not isatty(stdout):
+ return _nullpager(stdout, generator, color)
+
+ # Split and normalize the pager command into parts.
+ pager_cmd_parts = shlex.split(os.environ.get("PAGER", ""), posix=False)
+ if pager_cmd_parts:
+ if WIN:
+ if _tempfilepager(generator, pager_cmd_parts, color):
+ return
+ elif _pipepager(generator, pager_cmd_parts, color):
+ return
+
+ if os.environ.get("TERM") in ("dumb", "emacs"):
+ return _nullpager(stdout, generator, color)
+ if (WIN or sys.platform.startswith("os2")) and _tempfilepager(
+ generator, ["more"], color
+ ):
+ return
+ if _pipepager(generator, ["less"], color):
+ return
+
+ import tempfile
+
+ fd, filename = tempfile.mkstemp()
+ os.close(fd)
+ try:
+ if _pipepager(generator, ["more"], color):
+ return
+ return _nullpager(stdout, generator, color)
+ finally:
+ os.unlink(filename)
+
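`pager()` backs the public `click.echo_via_pager`: it prefers `$PAGER`, then platform pagers such as `less` and `more`, and falls back to plain printing on dumb terminals or non-TTY streams. A short sketch that streams a generator through whatever pager is selected:

```python
import click

def lines():
    # A generator keeps memory flat; text is fed to the pager incrementally.
    for i in range(1, 1001):
        yield click.style(f"line {i}\n", fg="green")

# ANSI colors are stripped automatically unless the chosen pager supports them.
click.echo_via_pager(lines())
```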
+
+def _pipepager(
+ generator: cabc.Iterable[str], cmd_parts: list[str], color: bool | None
+) -> bool:
+ """Page through text by feeding it to another program. Invoking a
+ pager through this might support colors.
+
+ Returns `True` if the command was found, `False` otherwise and thus another
+ pager should be attempted.
+ """
+ # Split the command into the invoked CLI and its parameters.
+ if not cmd_parts:
+ return False
+
+ import shutil
+
+ cmd = cmd_parts[0]
+ cmd_params = cmd_parts[1:]
+
+ cmd_filepath = shutil.which(cmd)
+ if not cmd_filepath:
+ return False
+ # Resolves symlinks and produces a normalized absolute path string.
+ cmd_path = Path(cmd_filepath).resolve()
+ cmd_name = cmd_path.name
+
+ import subprocess
+
+ # Make a local copy of the environment to not affect the global one.
+ env = dict(os.environ)
+
+ # If we're piping to less and the user hasn't decided on colors, we enable
+    # them by default if we find the -R flag in the command line arguments.
+ if color is None and cmd_name == "less":
+ less_flags = f"{os.environ.get('LESS', '')}{' '.join(cmd_params)}"
+ if not less_flags:
+ env["LESS"] = "-R"
+ color = True
+ elif "r" in less_flags or "R" in less_flags:
+ color = True
+
+ c = subprocess.Popen(
+ [str(cmd_path)] + cmd_params,
+ shell=True,
+ stdin=subprocess.PIPE,
+ env=env,
+ errors="replace",
+ text=True,
+ )
+ assert c.stdin is not None
+ try:
+ for text in generator:
+ if not color:
+ text = strip_ansi(text)
+
+ c.stdin.write(text)
+ except BrokenPipeError:
+ # In case the pager exited unexpectedly, ignore the broken pipe error.
+ pass
+ except Exception as e:
+ # In case there is an exception we want to close the pager immediately
+ # and let the caller handle it.
+ # Otherwise the pager will keep running, and the user may not notice
+ # the error message, or worse yet it may leave the terminal in a broken state.
+ c.terminate()
+ raise e
+ finally:
+ # We must close stdin and wait for the pager to exit before we continue
+ try:
+ c.stdin.close()
+ # Close implies flush, so it might throw a BrokenPipeError if the pager
+ # process exited already.
+ except BrokenPipeError:
+ pass
+
+ # Less doesn't respect ^C, but catches it for its own UI purposes (aborting
+ # search or other commands inside less).
+ #
+ # That means when the user hits ^C, the parent process (click) terminates,
+ # but less is still alive, paging the output and messing up the terminal.
+ #
+ # If the user wants to make the pager exit on ^C, they should set
+ # `LESS='-K'`. It's not our decision to make.
+ while True:
+ try:
+ c.wait()
+ except KeyboardInterrupt:
+ pass
+ else:
+ break
+
+ return True
+
+
+def _tempfilepager(
+ generator: cabc.Iterable[str], cmd_parts: list[str], color: bool | None
+) -> bool:
+ """Page through text by invoking a program on a temporary file.
+
+ Returns `True` if the command was found, `False` otherwise and thus another
+ pager should be attempted.
+ """
+ # Split the command into the invoked CLI and its parameters.
+ if not cmd_parts:
+ return False
+
+ import shutil
+
+ cmd = cmd_parts[0]
+
+ cmd_filepath = shutil.which(cmd)
+ if not cmd_filepath:
+ return False
+ # Resolves symlinks and produces a normalized absolute path string.
+ cmd_path = Path(cmd_filepath).resolve()
+
+ import subprocess
+ import tempfile
+
+ fd, filename = tempfile.mkstemp()
+ # TODO: This never terminates if the passed generator never terminates.
+ text = "".join(generator)
+ if not color:
+ text = strip_ansi(text)
+ encoding = get_best_encoding(sys.stdout)
+ with open_stream(filename, "wb")[0] as f:
+ f.write(text.encode(encoding))
+ try:
+ subprocess.call([str(cmd_path), filename])
+ except OSError:
+ # Command not found
+ pass
+ finally:
+ os.close(fd)
+ os.unlink(filename)
+
+ return True
+
+
+def _nullpager(
+ stream: t.TextIO, generator: cabc.Iterable[str], color: bool | None
+) -> None:
+ """Simply print unformatted text. This is the ultimate fallback."""
+ for text in generator:
+ if not color:
+ text = strip_ansi(text)
+ stream.write(text)
+
+
+class Editor:
+ def __init__(
+ self,
+ editor: str | None = None,
+ env: cabc.Mapping[str, str] | None = None,
+ require_save: bool = True,
+ extension: str = ".txt",
+ ) -> None:
+ self.editor = editor
+ self.env = env
+ self.require_save = require_save
+ self.extension = extension
+
+ def get_editor(self) -> str:
+ if self.editor is not None:
+ return self.editor
+ for key in "VISUAL", "EDITOR":
+ rv = os.environ.get(key)
+ if rv:
+ return rv
+ if WIN:
+ return "notepad"
+
+ from shutil import which
+
+ for editor in "sensible-editor", "vim", "nano":
+ if which(editor) is not None:
+ return editor
+ return "vi"
+
+ def edit_files(self, filenames: cabc.Iterable[str]) -> None:
+ import subprocess
+
+ editor = self.get_editor()
+ environ: dict[str, str] | None = None
+
+ if self.env:
+ environ = os.environ.copy()
+ environ.update(self.env)
+
+ exc_filename = " ".join(f'"{filename}"' for filename in filenames)
+
+ try:
+ c = subprocess.Popen(
+ args=f"{editor} {exc_filename}", env=environ, shell=True
+ )
+ exit_code = c.wait()
+ if exit_code != 0:
+ raise ClickException(
+ _("{editor}: Editing failed").format(editor=editor)
+ )
+ except OSError as e:
+ raise ClickException(
+ _("{editor}: Editing failed: {e}").format(editor=editor, e=e)
+ ) from e
+
+ @t.overload
+ def edit(self, text: bytes | bytearray) -> bytes | None: ...
+
+ # We cannot know whether or not the type expected is str or bytes when None
+ # is passed, so str is returned as that was what was done before.
+ @t.overload
+ def edit(self, text: str | None) -> str | None: ...
+
+ def edit(self, text: str | bytes | bytearray | None) -> str | bytes | None:
+ import tempfile
+
+ if text is None:
+ data: bytes | bytearray = b""
+ elif isinstance(text, (bytes, bytearray)):
+ data = text
+ else:
+ if text and not text.endswith("\n"):
+ text += "\n"
+
+ if WIN:
+ data = text.replace("\n", "\r\n").encode("utf-8-sig")
+ else:
+ data = text.encode("utf-8")
+
+ fd, name = tempfile.mkstemp(prefix="editor-", suffix=self.extension)
+ f: t.BinaryIO
+
+ try:
+ with os.fdopen(fd, "wb") as f:
+ f.write(data)
+
+ # If the filesystem resolution is 1 second, like Mac OS
+ # 10.12 Extended, or 2 seconds, like FAT32, and the editor
+ # closes very fast, require_save can fail. Set the modified
+ # time to be 2 seconds in the past to work around this.
+ os.utime(name, (os.path.getatime(name), os.path.getmtime(name) - 2))
+ # Depending on the resolution, the exact value might not be
+ # recorded, so get the new recorded value.
+ timestamp = os.path.getmtime(name)
+
+ self.edit_files((name,))
+
+ if self.require_save and os.path.getmtime(name) == timestamp:
+ return None
+
+ with open(name, "rb") as f:
+ rv = f.read()
+
+ if isinstance(text, (bytes, bytearray)):
+ return rv
+
+ return rv.decode("utf-8-sig").replace("\r\n", "\n")
+ finally:
+ os.unlink(name)
+
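`Editor` is exposed as `click.edit`: it writes the text to a temporary file, launches `$VISUAL`/`$EDITOR` (or `notepad` on Windows), and returns `None` when the file was not saved, which is the `require_save` timestamp check above. A hedged, interactive-only sketch:

```python
import click

template = "# Describe your change\n# Lines starting with '#' are kept as-is.\n"

# Returns the edited text, or None if the user quit without saving.
message = click.edit(template, extension=".md", require_save=True)

if message is None:
    click.echo("Aborted: nothing was saved.")
else:
    click.echo(f"Received {len(message.splitlines())} line(s).")
```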
+
+def open_url(url: str, wait: bool = False, locate: bool = False) -> int:
+ import subprocess
+
+ def _unquote_file(url: str) -> str:
+ from urllib.parse import unquote
+
+ if url.startswith("file://"):
+ url = unquote(url[7:])
+
+ return url
+
+ if sys.platform == "darwin":
+ args = ["open"]
+ if wait:
+ args.append("-W")
+ if locate:
+ args.append("-R")
+ args.append(_unquote_file(url))
+ null = open("/dev/null", "w")
+ try:
+ return subprocess.Popen(args, stderr=null).wait()
+ finally:
+ null.close()
+ elif WIN:
+ if locate:
+ url = _unquote_file(url)
+ args = ["explorer", f"/select,{url}"]
+ else:
+ args = ["start"]
+ if wait:
+ args.append("/WAIT")
+ args.append("")
+ args.append(url)
+ try:
+ return subprocess.call(args)
+ except OSError:
+ # Command not found
+ return 127
+ elif CYGWIN:
+ if locate:
+ url = _unquote_file(url)
+ args = ["cygstart", os.path.dirname(url)]
+ else:
+ args = ["cygstart"]
+ if wait:
+ args.append("-w")
+ args.append(url)
+ try:
+ return subprocess.call(args)
+ except OSError:
+ # Command not found
+ return 127
+
+ try:
+ if locate:
+ url = os.path.dirname(_unquote_file(url)) or "."
+ else:
+ url = _unquote_file(url)
+ c = subprocess.Popen(["xdg-open", url])
+ if wait:
+ return c.wait()
+ return 0
+ except OSError:
+ if url.startswith(("http://", "https://")) and not locate and not wait:
+ import webbrowser
+
+ webbrowser.open(url)
+ return 0
+ return 1
+
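`open_url` is wrapped by the public `click.launch`, which picks `open`, `start`, `cygstart`, or `xdg-open` depending on the platform, with a `webbrowser` fallback for plain http(s) URLs. Usage sketch (the local path is hypothetical):

```python
import click

# Open a URL with the default handler; the return value is the launcher's
# exit code (0 on success, 1 or 127 when no suitable opener was found).
code = click.launch("https://click.palletsprojects.com/")
click.echo(f"launcher exited with {code}")

# locate=True opens the containing folder with the file pre-selected,
# instead of opening the file itself.
click.launch("/tmp/report.csv", locate=True)
```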
+
+def _translate_ch_to_exc(ch: str) -> None:
+ if ch == "\x03":
+ raise KeyboardInterrupt()
+
+ if ch == "\x04" and not WIN: # Unix-like, Ctrl+D
+ raise EOFError()
+
+ if ch == "\x1a" and WIN: # Windows, Ctrl+Z
+ raise EOFError()
+
+ return None
+
+
+if sys.platform == "win32":
+ import msvcrt
+
+ @contextlib.contextmanager
+ def raw_terminal() -> cabc.Iterator[int]:
+ yield -1
+
+ def getchar(echo: bool) -> str:
+ # The function `getch` will return a bytes object corresponding to
+ # the pressed character. Since Windows 10 build 1803, it will also
+ # return \x00 when called a second time after pressing a regular key.
+ #
+ # `getwch` does not share this probably-bugged behavior. Moreover, it
+ # returns a Unicode object by default, which is what we want.
+ #
+ # Either of these functions will return \x00 or \xe0 to indicate
+ # a special key, and you need to call the same function again to get
+ # the "rest" of the code. The fun part is that \u00e0 is
+ # "latin small letter a with grave", so if you type that on a French
+ # keyboard, you _also_ get a \xe0.
+ # E.g., consider the Up arrow. This returns \xe0 and then \x48. The
+ # resulting Unicode string reads as "a with grave" + "capital H".
+ # This is indistinguishable from when the user actually types
+ # "a with grave" and then "capital H".
+ #
+ # When \xe0 is returned, we assume it's part of a special-key sequence
+ # and call `getwch` again, but that means that when the user types
+ # the \u00e0 character, `getchar` doesn't return until a second
+ # character is typed.
+ # The alternative is returning immediately, but that would mess up
+ # cross-platform handling of arrow keys and others that start with
+ # \xe0. Another option is using `getch`, but then we can't reliably
+ # read non-ASCII characters, because return values of `getch` are
+ # limited to the current 8-bit codepage.
+ #
+ # Anyway, Click doesn't claim to do this Right(tm), and using `getwch`
+ # is doing the right thing in more situations than with `getch`.
+
+ if echo:
+ func = t.cast(t.Callable[[], str], msvcrt.getwche)
+ else:
+ func = t.cast(t.Callable[[], str], msvcrt.getwch)
+
+ rv = func()
+
+ if rv in ("\x00", "\xe0"):
+ # \x00 and \xe0 are control characters that indicate special key,
+ # see above.
+ rv += func()
+
+ _translate_ch_to_exc(rv)
+ return rv
+
+else:
+ import termios
+ import tty
+
+ @contextlib.contextmanager
+ def raw_terminal() -> cabc.Iterator[int]:
+ f: t.TextIO | None
+ fd: int
+
+ if not isatty(sys.stdin):
+ f = open("/dev/tty")
+ fd = f.fileno()
+ else:
+ fd = sys.stdin.fileno()
+ f = None
+
+ try:
+ old_settings = termios.tcgetattr(fd)
+
+ try:
+ tty.setraw(fd)
+ yield fd
+ finally:
+ termios.tcsetattr(fd, termios.TCSADRAIN, old_settings)
+ sys.stdout.flush()
+
+ if f is not None:
+ f.close()
+ except termios.error:
+ pass
+
+ def getchar(echo: bool) -> str:
+ with raw_terminal() as fd:
+ ch = os.read(fd, 32).decode(get_best_encoding(sys.stdin), "replace")
+
+ if echo and isatty(sys.stdout):
+ sys.stdout.write(ch)
+
+ _translate_ch_to_exc(ch)
+ return ch
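Both platform branches above are reached through the public `click.getchar`, which returns one (possibly multi-character) keypress and, via `_translate_ch_to_exc`, converts Ctrl+C into `KeyboardInterrupt` and Ctrl+D (Ctrl+Z on Windows) into `EOFError`. An interactive sketch:

```python
import click

click.echo("Press any key to continue...")

try:
    key = click.getchar(echo=False)
except (KeyboardInterrupt, EOFError):
    click.echo("Cancelled.")
else:
    # Special keys arrive as multi-character strings, e.g. "\x1b[A" for Up
    # on Unix terminals or "\xe0H" on a Windows console.
    click.echo(f"Got {key!r}")
```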
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/click/_textwrap.py b/stripe_config/.venv/lib/python3.12/site-packages/click/_textwrap.py
new file mode 100644
index 0000000..97fbee3
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/click/_textwrap.py
@@ -0,0 +1,51 @@
+from __future__ import annotations
+
+import collections.abc as cabc
+import textwrap
+from contextlib import contextmanager
+
+
+class TextWrapper(textwrap.TextWrapper):
+ def _handle_long_word(
+ self,
+ reversed_chunks: list[str],
+ cur_line: list[str],
+ cur_len: int,
+ width: int,
+ ) -> None:
+ space_left = max(width - cur_len, 1)
+
+ if self.break_long_words:
+ last = reversed_chunks[-1]
+ cut = last[:space_left]
+ res = last[space_left:]
+ cur_line.append(cut)
+ reversed_chunks[-1] = res
+ elif not cur_line:
+ cur_line.append(reversed_chunks.pop())
+
+ @contextmanager
+ def extra_indent(self, indent: str) -> cabc.Iterator[None]:
+ old_initial_indent = self.initial_indent
+ old_subsequent_indent = self.subsequent_indent
+ self.initial_indent += indent
+ self.subsequent_indent += indent
+
+ try:
+ yield
+ finally:
+ self.initial_indent = old_initial_indent
+ self.subsequent_indent = old_subsequent_indent
+
+ def indent_only(self, text: str) -> str:
+ rv = []
+
+ for idx, line in enumerate(text.splitlines()):
+ indent = self.initial_indent
+
+ if idx > 0:
+ indent = self.subsequent_indent
+
+ rv.append(f"{indent}{line}")
+
+ return "\n".join(rv)
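This `TextWrapper` subclass is used by click's help-formatting helpers: `extra_indent` temporarily deepens both indents, and `indent_only` indents already-broken lines without re-flowing them. A small sketch (private module, imported here only for illustration):

```python
from click._textwrap import TextWrapper

wrapper = TextWrapper(width=40, initial_indent="  ", subsequent_indent="  ")
text = "Click wraps help text so it lines up neatly under the option names."

print(wrapper.fill(text))

# Temporarily push everything two more spaces to the right.
with wrapper.extra_indent("  "):
    print(wrapper.fill(text))

# indent_only() keeps the existing line breaks instead of re-wrapping.
print(wrapper.indent_only("first line\nsecond line"))
```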
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/click/_utils.py b/stripe_config/.venv/lib/python3.12/site-packages/click/_utils.py
new file mode 100644
index 0000000..09fb008
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/click/_utils.py
@@ -0,0 +1,36 @@
+from __future__ import annotations
+
+import enum
+import typing as t
+
+
+class Sentinel(enum.Enum):
+ """Enum used to define sentinel values.
+
+ .. seealso::
+
+        `PEP 661 - Sentinel Values <https://peps.python.org/pep-0661/>`_.
+ """
+
+ UNSET = object()
+ FLAG_NEEDS_VALUE = object()
+
+ def __repr__(self) -> str:
+ return f"{self.__class__.__name__}.{self.name}"
+
+
+UNSET = Sentinel.UNSET
+"""Sentinel used to indicate that a value is not set."""
+
+FLAG_NEEDS_VALUE = Sentinel.FLAG_NEEDS_VALUE
+"""Sentinel used to indicate an option was passed as a flag without a
+value but is not a flag option.
+
+``Option.consume_value`` uses this to prompt or use the ``flag_value``.
+"""
+
+T_UNSET = t.Literal[UNSET] # type: ignore[valid-type]
+"""Type hint for the :data:`UNSET` sentinel value."""
+
+T_FLAG_NEEDS_VALUE = t.Literal[FLAG_NEEDS_VALUE] # type: ignore[valid-type]
+"""Type hint for the :data:`FLAG_NEEDS_VALUE` sentinel value."""
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/click/_winconsole.py b/stripe_config/.venv/lib/python3.12/site-packages/click/_winconsole.py
new file mode 100644
index 0000000..e56c7c6
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/click/_winconsole.py
@@ -0,0 +1,296 @@
+# This module is based on the excellent work by Adam Bartoš who
+# provided a lot of what went into the implementation here in
+# the discussion to issue1602 in the Python bug tracker.
+#
+# There are some general differences in regards to how this works
+# compared to the original patches as we do not need to patch
+# the entire interpreter but just work in our little world of
+# echo and prompt.
+from __future__ import annotations
+
+import collections.abc as cabc
+import io
+import sys
+import time
+import typing as t
+from ctypes import Array
+from ctypes import byref
+from ctypes import c_char
+from ctypes import c_char_p
+from ctypes import c_int
+from ctypes import c_ssize_t
+from ctypes import c_ulong
+from ctypes import c_void_p
+from ctypes import POINTER
+from ctypes import py_object
+from ctypes import Structure
+from ctypes.wintypes import DWORD
+from ctypes.wintypes import HANDLE
+from ctypes.wintypes import LPCWSTR
+from ctypes.wintypes import LPWSTR
+
+from ._compat import _NonClosingTextIOWrapper
+
+assert sys.platform == "win32"
+import msvcrt # noqa: E402
+from ctypes import windll # noqa: E402
+from ctypes import WINFUNCTYPE # noqa: E402
+
+c_ssize_p = POINTER(c_ssize_t)
+
+kernel32 = windll.kernel32
+GetStdHandle = kernel32.GetStdHandle
+ReadConsoleW = kernel32.ReadConsoleW
+WriteConsoleW = kernel32.WriteConsoleW
+GetConsoleMode = kernel32.GetConsoleMode
+GetLastError = kernel32.GetLastError
+GetCommandLineW = WINFUNCTYPE(LPWSTR)(("GetCommandLineW", windll.kernel32))
+CommandLineToArgvW = WINFUNCTYPE(POINTER(LPWSTR), LPCWSTR, POINTER(c_int))(
+ ("CommandLineToArgvW", windll.shell32)
+)
+LocalFree = WINFUNCTYPE(c_void_p, c_void_p)(("LocalFree", windll.kernel32))
+
+STDIN_HANDLE = GetStdHandle(-10)
+STDOUT_HANDLE = GetStdHandle(-11)
+STDERR_HANDLE = GetStdHandle(-12)
+
+PyBUF_SIMPLE = 0
+PyBUF_WRITABLE = 1
+
+ERROR_SUCCESS = 0
+ERROR_NOT_ENOUGH_MEMORY = 8
+ERROR_OPERATION_ABORTED = 995
+
+STDIN_FILENO = 0
+STDOUT_FILENO = 1
+STDERR_FILENO = 2
+
+EOF = b"\x1a"
+MAX_BYTES_WRITTEN = 32767
+
+if t.TYPE_CHECKING:
+ try:
+        # Prefer `collections.abc.Buffer` (available from Python 3.12); fall
+        # back to the `typing_extensions` backport on older versions.
+ from collections.abc import Buffer # type: ignore
+ except ImportError:
+ from typing_extensions import Buffer
+
+try:
+ from ctypes import pythonapi
+except ImportError:
+ # On PyPy we cannot get buffers so our ability to operate here is
+ # severely limited.
+ get_buffer = None
+else:
+
+ class Py_buffer(Structure):
+ _fields_ = [ # noqa: RUF012
+ ("buf", c_void_p),
+ ("obj", py_object),
+ ("len", c_ssize_t),
+ ("itemsize", c_ssize_t),
+ ("readonly", c_int),
+ ("ndim", c_int),
+ ("format", c_char_p),
+ ("shape", c_ssize_p),
+ ("strides", c_ssize_p),
+ ("suboffsets", c_ssize_p),
+ ("internal", c_void_p),
+ ]
+
+ PyObject_GetBuffer = pythonapi.PyObject_GetBuffer
+ PyBuffer_Release = pythonapi.PyBuffer_Release
+
+ def get_buffer(obj: Buffer, writable: bool = False) -> Array[c_char]:
+ buf = Py_buffer()
+ flags: int = PyBUF_WRITABLE if writable else PyBUF_SIMPLE
+ PyObject_GetBuffer(py_object(obj), byref(buf), flags)
+
+ try:
+ buffer_type = c_char * buf.len
+ out: Array[c_char] = buffer_type.from_address(buf.buf)
+ return out
+ finally:
+ PyBuffer_Release(byref(buf))
+
+
+class _WindowsConsoleRawIOBase(io.RawIOBase):
+ def __init__(self, handle: int | None) -> None:
+ self.handle = handle
+
+ def isatty(self) -> t.Literal[True]:
+ super().isatty()
+ return True
+
+
+class _WindowsConsoleReader(_WindowsConsoleRawIOBase):
+ def readable(self) -> t.Literal[True]:
+ return True
+
+ def readinto(self, b: Buffer) -> int:
+ bytes_to_be_read = len(b)
+ if not bytes_to_be_read:
+ return 0
+ elif bytes_to_be_read % 2:
+ raise ValueError(
+ "cannot read odd number of bytes from UTF-16-LE encoded console"
+ )
+
+ buffer = get_buffer(b, writable=True)
+ code_units_to_be_read = bytes_to_be_read // 2
+ code_units_read = c_ulong()
+
+ rv = ReadConsoleW(
+ HANDLE(self.handle),
+ buffer,
+ code_units_to_be_read,
+ byref(code_units_read),
+ None,
+ )
+ if GetLastError() == ERROR_OPERATION_ABORTED:
+ # wait for KeyboardInterrupt
+ time.sleep(0.1)
+ if not rv:
+ raise OSError(f"Windows error: {GetLastError()}")
+
+ if buffer[0] == EOF:
+ return 0
+ return 2 * code_units_read.value
+
+
+class _WindowsConsoleWriter(_WindowsConsoleRawIOBase):
+ def writable(self) -> t.Literal[True]:
+ return True
+
+ @staticmethod
+ def _get_error_message(errno: int) -> str:
+ if errno == ERROR_SUCCESS:
+ return "ERROR_SUCCESS"
+ elif errno == ERROR_NOT_ENOUGH_MEMORY:
+ return "ERROR_NOT_ENOUGH_MEMORY"
+ return f"Windows error {errno}"
+
+ def write(self, b: Buffer) -> int:
+ bytes_to_be_written = len(b)
+ buf = get_buffer(b)
+ code_units_to_be_written = min(bytes_to_be_written, MAX_BYTES_WRITTEN) // 2
+ code_units_written = c_ulong()
+
+ WriteConsoleW(
+ HANDLE(self.handle),
+ buf,
+ code_units_to_be_written,
+ byref(code_units_written),
+ None,
+ )
+ bytes_written = 2 * code_units_written.value
+
+ if bytes_written == 0 and bytes_to_be_written > 0:
+ raise OSError(self._get_error_message(GetLastError()))
+ return bytes_written
+
+
+class ConsoleStream:
+ def __init__(self, text_stream: t.TextIO, byte_stream: t.BinaryIO) -> None:
+ self._text_stream = text_stream
+ self.buffer = byte_stream
+
+ @property
+ def name(self) -> str:
+ return self.buffer.name
+
+ def write(self, x: t.AnyStr) -> int:
+ if isinstance(x, str):
+ return self._text_stream.write(x)
+ try:
+ self.flush()
+ except Exception:
+ pass
+ return self.buffer.write(x)
+
+ def writelines(self, lines: cabc.Iterable[t.AnyStr]) -> None:
+ for line in lines:
+ self.write(line)
+
+ def __getattr__(self, name: str) -> t.Any:
+ return getattr(self._text_stream, name)
+
+ def isatty(self) -> bool:
+ return self.buffer.isatty()
+
+ def __repr__(self) -> str:
+ return f""
+
+
+def _get_text_stdin(buffer_stream: t.BinaryIO) -> t.TextIO:
+ text_stream = _NonClosingTextIOWrapper(
+ io.BufferedReader(_WindowsConsoleReader(STDIN_HANDLE)),
+ "utf-16-le",
+ "strict",
+ line_buffering=True,
+ )
+ return t.cast(t.TextIO, ConsoleStream(text_stream, buffer_stream))
+
+
+def _get_text_stdout(buffer_stream: t.BinaryIO) -> t.TextIO:
+ text_stream = _NonClosingTextIOWrapper(
+ io.BufferedWriter(_WindowsConsoleWriter(STDOUT_HANDLE)),
+ "utf-16-le",
+ "strict",
+ line_buffering=True,
+ )
+ return t.cast(t.TextIO, ConsoleStream(text_stream, buffer_stream))
+
+
+def _get_text_stderr(buffer_stream: t.BinaryIO) -> t.TextIO:
+ text_stream = _NonClosingTextIOWrapper(
+ io.BufferedWriter(_WindowsConsoleWriter(STDERR_HANDLE)),
+ "utf-16-le",
+ "strict",
+ line_buffering=True,
+ )
+ return t.cast(t.TextIO, ConsoleStream(text_stream, buffer_stream))
+
+
+_stream_factories: cabc.Mapping[int, t.Callable[[t.BinaryIO], t.TextIO]] = {
+ 0: _get_text_stdin,
+ 1: _get_text_stdout,
+ 2: _get_text_stderr,
+}
+
+
+def _is_console(f: t.TextIO) -> bool:
+ if not hasattr(f, "fileno"):
+ return False
+
+ try:
+ fileno = f.fileno()
+ except (OSError, io.UnsupportedOperation):
+ return False
+
+ handle = msvcrt.get_osfhandle(fileno)
+ return bool(GetConsoleMode(handle, byref(DWORD())))
+
+
+def _get_windows_console_stream(
+ f: t.TextIO, encoding: str | None, errors: str | None
+) -> t.TextIO | None:
+ if (
+ get_buffer is None
+ or encoding not in {"utf-16-le", None}
+ or errors not in {"strict", None}
+ or not _is_console(f)
+ ):
+ return None
+
+ func = _stream_factories.get(f.fileno())
+ if func is None:
+ return None
+
+ b = getattr(f, "buffer", None)
+
+ if b is None:
+ return None
+
+ return func(b)
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/click/core.py b/stripe_config/.venv/lib/python3.12/site-packages/click/core.py
new file mode 100644
index 0000000..ff2f74a
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/click/core.py
@@ -0,0 +1,3347 @@
+from __future__ import annotations
+
+import collections.abc as cabc
+import enum
+import errno
+import inspect
+import os
+import sys
+import typing as t
+from collections import abc
+from collections import Counter
+from contextlib import AbstractContextManager
+from contextlib import contextmanager
+from contextlib import ExitStack
+from functools import update_wrapper
+from gettext import gettext as _
+from gettext import ngettext
+from itertools import repeat
+from types import TracebackType
+
+from . import types
+from ._utils import FLAG_NEEDS_VALUE
+from ._utils import UNSET
+from .exceptions import Abort
+from .exceptions import BadParameter
+from .exceptions import ClickException
+from .exceptions import Exit
+from .exceptions import MissingParameter
+from .exceptions import NoArgsIsHelpError
+from .exceptions import UsageError
+from .formatting import HelpFormatter
+from .formatting import join_options
+from .globals import pop_context
+from .globals import push_context
+from .parser import _OptionParser
+from .parser import _split_opt
+from .termui import confirm
+from .termui import prompt
+from .termui import style
+from .utils import _detect_program_name
+from .utils import _expand_args
+from .utils import echo
+from .utils import make_default_short_help
+from .utils import make_str
+from .utils import PacifyFlushWrapper
+
+if t.TYPE_CHECKING:
+ from .shell_completion import CompletionItem
+
+F = t.TypeVar("F", bound="t.Callable[..., t.Any]")
+V = t.TypeVar("V")
+
+
+def _complete_visible_commands(
+ ctx: Context, incomplete: str
+) -> cabc.Iterator[tuple[str, Command]]:
+ """List all the subcommands of a group that start with the
+ incomplete value and aren't hidden.
+
+ :param ctx: Invocation context for the group.
+ :param incomplete: Value being completed. May be empty.
+ """
+ multi = t.cast(Group, ctx.command)
+
+ for name in multi.list_commands(ctx):
+ if name.startswith(incomplete):
+ command = multi.get_command(ctx, name)
+
+ if command is not None and not command.hidden:
+ yield name, command
+
+
+def _check_nested_chain(
+ base_command: Group, cmd_name: str, cmd: Command, register: bool = False
+) -> None:
+ if not base_command.chain or not isinstance(cmd, Group):
+ return
+
+ if register:
+ message = (
+ f"It is not possible to add the group {cmd_name!r} to another"
+ f" group {base_command.name!r} that is in chain mode."
+ )
+ else:
+        message = (
+            f"Found the group {cmd_name!r} as subcommand to another group "
+            f"{base_command.name!r} that is in chain mode. This is not supported."
+        )
+
+ raise RuntimeError(message)
+
+
+def batch(iterable: cabc.Iterable[V], batch_size: int) -> list[tuple[V, ...]]:
+ return list(zip(*repeat(iter(iterable), batch_size), strict=False))
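+
+# Quick illustration of how `batch` groups items: zip() pulls from the *same*
+# iterator `batch_size` times per step, so consecutive items land in one tuple;
+# any trailing partial batch is dropped because zip() stops at the shortest input.
+#
+#     batch([1, 2, 3, 4, 5], 2)  # -> [(1, 2), (3, 4)]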
+
+
+@contextmanager
+def augment_usage_errors(
+ ctx: Context, param: Parameter | None = None
+) -> cabc.Iterator[None]:
+ """Context manager that attaches extra information to exceptions."""
+ try:
+ yield
+ except BadParameter as e:
+ if e.ctx is None:
+ e.ctx = ctx
+ if param is not None and e.param is None:
+ e.param = param
+ raise
+ except UsageError as e:
+ if e.ctx is None:
+ e.ctx = ctx
+ raise
+
+
+def iter_params_for_processing(
+ invocation_order: cabc.Sequence[Parameter],
+ declaration_order: cabc.Sequence[Parameter],
+) -> list[Parameter]:
+ """Returns all declared parameters in the order they should be processed.
+
+ The declared parameters are re-shuffled depending on the order in which
+    they were invoked, as well as the eagerness of each parameter.
+
+ The invocation order takes precedence over the declaration order. I.e. the
+ order in which the user provided them to the CLI is respected.
+
+ This behavior and its effect on callback evaluation is detailed at:
+ https://click.palletsprojects.com/en/stable/advanced/#callback-evaluation-order
+ """
+
+ def sort_key(item: Parameter) -> tuple[bool, float]:
+ try:
+ idx: float = invocation_order.index(item)
+ except ValueError:
+ idx = float("inf")
+
+ return not item.is_eager, idx
+
+ return sorted(declaration_order, key=sort_key)
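+
+# Worked example (illustrative): with declared params [a, b, c] where only `c`
+# is eager and the user typed values for `b` then `a` on the command line, the
+# processing order becomes [c, b, a]: eager params sort first, then params by
+# their position in the invocation order; params that were never invoked
+# (idx == inf) sort last within their eagerness group.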
+
+
+class ParameterSource(enum.Enum):
+ """This is an :class:`~enum.Enum` that indicates the source of a
+ parameter's value.
+
+ Use :meth:`click.Context.get_parameter_source` to get the
+ source for a parameter by name.
+
+ .. versionchanged:: 8.0
+ Use :class:`~enum.Enum` and drop the ``validate`` method.
+
+ .. versionchanged:: 8.0
+ Added the ``PROMPT`` value.
+ """
+
+ COMMANDLINE = enum.auto()
+ """The value was provided by the command line args."""
+ ENVIRONMENT = enum.auto()
+ """The value was provided with an environment variable."""
+ DEFAULT = enum.auto()
+ """Used the default specified by the parameter."""
+ DEFAULT_MAP = enum.auto()
+ """Used a default provided by :attr:`Context.default_map`."""
+ PROMPT = enum.auto()
+ """Used a prompt to confirm a default or provide a value."""
+
+
+class Context:
+ """The context is a special internal object that holds state relevant
+ for the script execution at every single level. It's normally invisible
+ to commands unless they opt-in to getting access to it.
+
+ The context is useful as it can pass internal objects around and can
+ control special execution features such as reading data from
+ environment variables.
+
+ A context can be used as context manager in which case it will call
+ :meth:`close` on teardown.
+
+ :param command: the command class for this context.
+ :param parent: the parent context.
+    :param info_name: the info name for this invocation. Generally this
+                      is the most descriptive name for the script or
+                      command. For the toplevel script it is usually
+                      the name of the script, for commands below it, it's
+                      the name of the command.
+ :param obj: an arbitrary object of user data.
+ :param auto_envvar_prefix: the prefix to use for automatic environment
+ variables. If this is `None` then reading
+ from environment variables is disabled. This
+ does not affect manually set environment
+ variables which are always read.
+    :param default_map: a dictionary (-like object) with default values
+ for parameters.
+ :param terminal_width: the width of the terminal. The default is
+ inherit from parent context. If no context
+ defines the terminal width then auto
+ detection will be applied.
+ :param max_content_width: the maximum width for content rendered by
+ Click (this currently only affects help
+ pages). This defaults to 80 characters if
+ not overridden. In other words: even if the
+ terminal is larger than that, Click will not
+ format things wider than 80 characters by
+ default. In addition to that, formatters might
+ add some safety mapping on the right.
+ :param resilient_parsing: if this flag is enabled then Click will
+ parse without any interactivity or callback
+ invocation. Default values will also be
+ ignored. This is useful for implementing
+ things such as completion support.
+ :param allow_extra_args: if this is set to `True` then extra arguments
+ at the end will not raise an error and will be
+ kept on the context. The default is to inherit
+ from the command.
+ :param allow_interspersed_args: if this is set to `False` then options
+ and arguments cannot be mixed. The
+ default is to inherit from the command.
+ :param ignore_unknown_options: instructs click to ignore options it does
+ not know and keeps them for later
+ processing.
+ :param help_option_names: optionally a list of strings that define how
+ the default help parameter is named. The
+ default is ``['--help']``.
+ :param token_normalize_func: an optional function that is used to
+ normalize tokens (options, choices,
+ etc.). This for instance can be used to
+ implement case insensitive behavior.
+ :param color: controls if the terminal supports ANSI colors or not. The
+ default is autodetection. This is only needed if ANSI
+ codes are used in texts that Click prints which is by
+ default not the case. This for instance would affect
+ help output.
+ :param show_default: Show the default value for commands. If this
+ value is not set, it defaults to the value from the parent
+ context. ``Command.show_default`` overrides this default for the
+ specific command.
+
+ .. versionchanged:: 8.2
+ The ``protected_args`` attribute is deprecated and will be removed in
+ Click 9.0. ``args`` will contain remaining unparsed tokens.
+
+ .. versionchanged:: 8.1
+ The ``show_default`` parameter is overridden by
+ ``Command.show_default``, instead of the other way around.
+
+ .. versionchanged:: 8.0
+ The ``show_default`` parameter defaults to the value from the
+ parent context.
+
+ .. versionchanged:: 7.1
+ Added the ``show_default`` parameter.
+
+ .. versionchanged:: 4.0
+ Added the ``color``, ``ignore_unknown_options``, and
+ ``max_content_width`` parameters.
+
+ .. versionchanged:: 3.0
+ Added the ``allow_extra_args`` and ``allow_interspersed_args``
+ parameters.
+
+ .. versionchanged:: 2.0
+ Added the ``resilient_parsing``, ``help_option_names``, and
+ ``token_normalize_func`` parameters.
+ """
+
+ #: The formatter class to create with :meth:`make_formatter`.
+ #:
+ #: .. versionadded:: 8.0
+ formatter_class: type[HelpFormatter] = HelpFormatter
+
+ def __init__(
+ self,
+ command: Command,
+ parent: Context | None = None,
+ info_name: str | None = None,
+ obj: t.Any | None = None,
+ auto_envvar_prefix: str | None = None,
+ default_map: cabc.MutableMapping[str, t.Any] | None = None,
+ terminal_width: int | None = None,
+ max_content_width: int | None = None,
+ resilient_parsing: bool = False,
+ allow_extra_args: bool | None = None,
+ allow_interspersed_args: bool | None = None,
+ ignore_unknown_options: bool | None = None,
+ help_option_names: list[str] | None = None,
+ token_normalize_func: t.Callable[[str], str] | None = None,
+ color: bool | None = None,
+ show_default: bool | None = None,
+ ) -> None:
+ #: the parent context or `None` if none exists.
+ self.parent = parent
+ #: the :class:`Command` for this context.
+ self.command = command
+ #: the descriptive information name
+ self.info_name = info_name
+ #: Map of parameter names to their parsed values. Parameters
+ #: with ``expose_value=False`` are not stored.
+ self.params: dict[str, t.Any] = {}
+ #: the leftover arguments.
+ self.args: list[str] = []
+ #: protected arguments. These are arguments that are prepended
+ #: to `args` when certain parsing scenarios are encountered but
+    #: must never be propagated to other arguments. This is used
+ #: to implement nested parsing.
+ self._protected_args: list[str] = []
+ #: the collected prefixes of the command's options.
+ self._opt_prefixes: set[str] = set(parent._opt_prefixes) if parent else set()
+
+ if obj is None and parent is not None:
+ obj = parent.obj
+
+ #: the user object stored.
+ self.obj: t.Any = obj
+ self._meta: dict[str, t.Any] = getattr(parent, "meta", {})
+
+ #: A dictionary (-like object) with defaults for parameters.
+ if (
+ default_map is None
+ and info_name is not None
+ and parent is not None
+ and parent.default_map is not None
+ ):
+ default_map = parent.default_map.get(info_name)
+
+ self.default_map: cabc.MutableMapping[str, t.Any] | None = default_map
+
+ #: This flag indicates if a subcommand is going to be executed. A
+ #: group callback can use this information to figure out if it's
+ #: being executed directly or because the execution flow passes
+ #: onwards to a subcommand. By default it's None, but it can be
+ #: the name of the subcommand to execute.
+ #:
+ #: If chaining is enabled this will be set to ``'*'`` in case
+ #: any commands are executed. It is however not possible to
+ #: figure out which ones. If you require this knowledge you
+ #: should use a :func:`result_callback`.
+ self.invoked_subcommand: str | None = None
+
+ if terminal_width is None and parent is not None:
+ terminal_width = parent.terminal_width
+
+ #: The width of the terminal (None is autodetection).
+ self.terminal_width: int | None = terminal_width
+
+ if max_content_width is None and parent is not None:
+ max_content_width = parent.max_content_width
+
+ #: The maximum width of formatted content (None implies a sensible
+ #: default which is 80 for most things).
+ self.max_content_width: int | None = max_content_width
+
+ if allow_extra_args is None:
+ allow_extra_args = command.allow_extra_args
+
+ #: Indicates if the context allows extra args or if it should
+ #: fail on parsing.
+ #:
+ #: .. versionadded:: 3.0
+ self.allow_extra_args = allow_extra_args
+
+ if allow_interspersed_args is None:
+ allow_interspersed_args = command.allow_interspersed_args
+
+ #: Indicates if the context allows mixing of arguments and
+ #: options or not.
+ #:
+ #: .. versionadded:: 3.0
+ self.allow_interspersed_args: bool = allow_interspersed_args
+
+ if ignore_unknown_options is None:
+ ignore_unknown_options = command.ignore_unknown_options
+
+ #: Instructs click to ignore options that a command does not
+ #: understand and will store it on the context for later
+ #: processing. This is primarily useful for situations where you
+ #: want to call into external programs. Generally this pattern is
+    #: strongly discouraged because it's not possible to losslessly
+ #: forward all arguments.
+ #:
+ #: .. versionadded:: 4.0
+ self.ignore_unknown_options: bool = ignore_unknown_options
+
+ if help_option_names is None:
+ if parent is not None:
+ help_option_names = parent.help_option_names
+ else:
+ help_option_names = ["--help"]
+
+ #: The names for the help options.
+ self.help_option_names: list[str] = help_option_names
+
+ if token_normalize_func is None and parent is not None:
+ token_normalize_func = parent.token_normalize_func
+
+ #: An optional normalization function for tokens. This is
+ #: options, choices, commands etc.
+ self.token_normalize_func: t.Callable[[str], str] | None = token_normalize_func
+
+ #: Indicates if resilient parsing is enabled. In that case Click
+ #: will do its best to not cause any failures and default values
+ #: will be ignored. Useful for completion.
+ self.resilient_parsing: bool = resilient_parsing
+
+ # If there is no envvar prefix yet, but the parent has one and
+ # the command on this level has a name, we can expand the envvar
+ # prefix automatically.
+ if auto_envvar_prefix is None:
+ if (
+ parent is not None
+ and parent.auto_envvar_prefix is not None
+ and self.info_name is not None
+ ):
+ auto_envvar_prefix = (
+ f"{parent.auto_envvar_prefix}_{self.info_name.upper()}"
+ )
+ else:
+ auto_envvar_prefix = auto_envvar_prefix.upper()
+
+ if auto_envvar_prefix is not None:
+ auto_envvar_prefix = auto_envvar_prefix.replace("-", "_")
+
+ self.auto_envvar_prefix: str | None = auto_envvar_prefix
+
+ if color is None and parent is not None:
+ color = parent.color
+
+ #: Controls if styling output is wanted or not.
+ self.color: bool | None = color
+
+ if show_default is None and parent is not None:
+ show_default = parent.show_default
+
+ #: Show option default values when formatting help text.
+ self.show_default: bool | None = show_default
+
+ self._close_callbacks: list[t.Callable[[], t.Any]] = []
+ self._depth = 0
+ self._parameter_source: dict[str, ParameterSource] = {}
+ self._exit_stack = ExitStack()
+
+ @property
+ def protected_args(self) -> list[str]:
+ import warnings
+
+ warnings.warn(
+ "'protected_args' is deprecated and will be removed in Click 9.0."
+ " 'args' will contain remaining unparsed tokens.",
+ DeprecationWarning,
+ stacklevel=2,
+ )
+ return self._protected_args
+
+ def to_info_dict(self) -> dict[str, t.Any]:
+ """Gather information that could be useful for a tool generating
+ user-facing documentation. This traverses the entire CLI
+ structure.
+
+ .. code-block:: python
+
+ with Context(cli) as ctx:
+ info = ctx.to_info_dict()
+
+ .. versionadded:: 8.0
+ """
+ return {
+ "command": self.command.to_info_dict(self),
+ "info_name": self.info_name,
+ "allow_extra_args": self.allow_extra_args,
+ "allow_interspersed_args": self.allow_interspersed_args,
+ "ignore_unknown_options": self.ignore_unknown_options,
+ "auto_envvar_prefix": self.auto_envvar_prefix,
+ }
+
+ def __enter__(self) -> Context:
+ self._depth += 1
+ push_context(self)
+ return self
+
+ def __exit__(
+ self,
+ exc_type: type[BaseException] | None,
+ exc_value: BaseException | None,
+ tb: TracebackType | None,
+ ) -> bool | None:
+ self._depth -= 1
+ exit_result: bool | None = None
+ if self._depth == 0:
+ exit_result = self._close_with_exception_info(exc_type, exc_value, tb)
+ pop_context()
+
+ return exit_result
+
+ @contextmanager
+ def scope(self, cleanup: bool = True) -> cabc.Iterator[Context]:
+ """This helper method can be used with the context object to promote
+ it to the current thread local (see :func:`get_current_context`).
+ The default behavior of this is to invoke the cleanup functions which
+ can be disabled by setting `cleanup` to `False`. The cleanup
+ functions are typically used for things such as closing file handles.
+
+ If the cleanup is intended the context object can also be directly
+ used as a context manager.
+
+ Example usage::
+
+ with ctx.scope():
+ assert get_current_context() is ctx
+
+ This is equivalent::
+
+ with ctx:
+ assert get_current_context() is ctx
+
+ .. versionadded:: 5.0
+
+ :param cleanup: controls if the cleanup functions should be run or
+ not. The default is to run these functions. In
+ some situations the context only wants to be
+ temporarily pushed in which case this can be disabled.
+ Nested pushes automatically defer the cleanup.
+ """
+ if not cleanup:
+ self._depth += 1
+ try:
+ with self as rv:
+ yield rv
+ finally:
+ if not cleanup:
+ self._depth -= 1
+
+ @property
+ def meta(self) -> dict[str, t.Any]:
+ """This is a dictionary which is shared with all the contexts
+ that are nested. It exists so that click utilities can store some
+ state here if they need to. It is however the responsibility of
+ that code to manage this dictionary well.
+
+ The keys are supposed to be unique dotted strings. For instance
+ module paths are a good choice for it. What is stored in there is
+ irrelevant for the operation of click. However what is important is
+ that code that places data here adheres to the general semantics of
+ the system.
+
+ Example usage::
+
+ LANG_KEY = f'{__name__}.lang'
+
+ def set_language(value):
+ ctx = get_current_context()
+ ctx.meta[LANG_KEY] = value
+
+ def get_language():
+ return get_current_context().meta.get(LANG_KEY, 'en_US')
+
+ .. versionadded:: 5.0
+ """
+ return self._meta
+
+ def make_formatter(self) -> HelpFormatter:
+ """Creates the :class:`~click.HelpFormatter` for the help and
+ usage output.
+
+ To quickly customize the formatter class used without overriding
+ this method, set the :attr:`formatter_class` attribute.
+
+ .. versionchanged:: 8.0
+ Added the :attr:`formatter_class` attribute.
+ """
+ return self.formatter_class(
+ width=self.terminal_width, max_width=self.max_content_width
+ )
+
+ def with_resource(self, context_manager: AbstractContextManager[V]) -> V:
+ """Register a resource as if it were used in a ``with``
+ statement. The resource will be cleaned up when the context is
+ popped.
+
+ Uses :meth:`contextlib.ExitStack.enter_context`. It calls the
+ resource's ``__enter__()`` method and returns the result. When
+ the context is popped, it closes the stack, which calls the
+ resource's ``__exit__()`` method.
+
+ To register a cleanup function for something that isn't a
+ context manager, use :meth:`call_on_close`. Or use something
+ from :mod:`contextlib` to turn it into a context manager first.
+
+ .. code-block:: python
+
+ @click.group()
+ @click.option("--name")
+ @click.pass_context
+ def cli(ctx):
+ ctx.obj = ctx.with_resource(connect_db(name))
+
+ :param context_manager: The context manager to enter.
+ :return: Whatever ``context_manager.__enter__()`` returns.
+
+ .. versionadded:: 8.0
+ """
+ return self._exit_stack.enter_context(context_manager)
+
+ def call_on_close(self, f: t.Callable[..., t.Any]) -> t.Callable[..., t.Any]:
+ """Register a function to be called when the context tears down.
+
+ This can be used to close resources opened during the script
+ execution. Resources that support Python's context manager
+ protocol which would be used in a ``with`` statement should be
+ registered with :meth:`with_resource` instead.
+
+ :param f: The function to execute on teardown.
+ """
+ return self._exit_stack.callback(f)
+
+ def close(self) -> None:
+ """Invoke all close callbacks registered with
+ :meth:`call_on_close`, and exit all context managers entered
+ with :meth:`with_resource`.
+ """
+ self._close_with_exception_info(None, None, None)
+
+ def _close_with_exception_info(
+ self,
+ exc_type: type[BaseException] | None,
+ exc_value: BaseException | None,
+ tb: TracebackType | None,
+ ) -> bool | None:
+ """Unwind the exit stack by calling its :meth:`__exit__` providing the exception
+ information to allow for exception handling by the various resources registered
+        using :meth:`with_resource`.
+
+ :return: Whatever ``exit_stack.__exit__()`` returns.
+ """
+ exit_result = self._exit_stack.__exit__(exc_type, exc_value, tb)
+ # In case the context is reused, create a new exit stack.
+ self._exit_stack = ExitStack()
+
+ return exit_result
+
+ @property
+ def command_path(self) -> str:
+ """The computed command path. This is used for the ``usage``
+ information on the help page. It's automatically created by
+ combining the info names of the chain of contexts to the root.
+ """
+ rv = ""
+ if self.info_name is not None:
+ rv = self.info_name
+ if self.parent is not None:
+ parent_command_path = [self.parent.command_path]
+
+ if isinstance(self.parent.command, Command):
+ for param in self.parent.command.get_params(self):
+ parent_command_path.extend(param.get_usage_pieces(self))
+
+ rv = f"{' '.join(parent_command_path)} {rv}"
+ return rv.lstrip()
+
+ def find_root(self) -> Context:
+ """Finds the outermost context."""
+ node = self
+ while node.parent is not None:
+ node = node.parent
+ return node
+
+ def find_object(self, object_type: type[V]) -> V | None:
+ """Finds the closest object of a given type."""
+ node: Context | None = self
+
+ while node is not None:
+ if isinstance(node.obj, object_type):
+ return node.obj
+
+ node = node.parent
+
+ return None
+
+ def ensure_object(self, object_type: type[V]) -> V:
+ """Like :meth:`find_object` but sets the innermost object to a
+ new instance of `object_type` if it does not exist.
+ """
+ rv = self.find_object(object_type)
+ if rv is None:
+ self.obj = rv = object_type()
+ return rv
+
+ @t.overload
+ def lookup_default(
+ self, name: str, call: t.Literal[True] = True
+ ) -> t.Any | None: ...
+
+ @t.overload
+ def lookup_default(
+ self, name: str, call: t.Literal[False] = ...
+ ) -> t.Any | t.Callable[[], t.Any] | None: ...
+
+ def lookup_default(self, name: str, call: bool = True) -> t.Any | None:
+ """Get the default for a parameter from :attr:`default_map`.
+
+ :param name: Name of the parameter.
+ :param call: If the default is a callable, call it. Disable to
+ return the callable instead.
+
+ .. versionchanged:: 8.0
+ Added the ``call`` parameter.
+ """
+ if self.default_map is not None:
+ value = self.default_map.get(name, UNSET)
+
+ if call and callable(value):
+ return value()
+
+ return value
+
+ return UNSET
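+
+    # Illustrative behaviour (hypothetical default_map): with
+    # ctx.default_map == {"count": lambda: 3}, lookup_default("count") returns 3,
+    # lookup_default("count", call=False) returns the lambda itself, and a name
+    # missing from the map yields the internal UNSET sentinel.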
+
+ def fail(self, message: str) -> t.NoReturn:
+ """Aborts the execution of the program with a specific error
+ message.
+
+ :param message: the error message to fail with.
+ """
+ raise UsageError(message, self)
+
+ def abort(self) -> t.NoReturn:
+ """Aborts the script."""
+ raise Abort()
+
+ def exit(self, code: int = 0) -> t.NoReturn:
+ """Exits the application with a given exit code.
+
+ .. versionchanged:: 8.2
+ Callbacks and context managers registered with :meth:`call_on_close`
+ and :meth:`with_resource` are closed before exiting.
+ """
+ self.close()
+ raise Exit(code)
+
+ def get_usage(self) -> str:
+ """Helper method to get formatted usage string for the current
+ context and command.
+ """
+ return self.command.get_usage(self)
+
+ def get_help(self) -> str:
+ """Helper method to get formatted help page for the current
+ context and command.
+ """
+ return self.command.get_help(self)
+
+ def _make_sub_context(self, command: Command) -> Context:
+ """Create a new context of the same type as this context, but
+ for a new command.
+
+ :meta private:
+ """
+ return type(self)(command, info_name=command.name, parent=self)
+
+ @t.overload
+ def invoke(
+ self, callback: t.Callable[..., V], /, *args: t.Any, **kwargs: t.Any
+ ) -> V: ...
+
+ @t.overload
+ def invoke(self, callback: Command, /, *args: t.Any, **kwargs: t.Any) -> t.Any: ...
+
+ def invoke(
+ self, callback: Command | t.Callable[..., V], /, *args: t.Any, **kwargs: t.Any
+ ) -> t.Any | V:
+ """Invokes a command callback in exactly the way it expects. There
+ are two ways to invoke this method:
+
+ 1. the first argument can be a callback and all other arguments and
+ keyword arguments are forwarded directly to the function.
+ 2. the first argument is a click command object. In that case all
+ arguments are forwarded as well but proper click parameters
+ (options and click arguments) must be keyword arguments and Click
+ will fill in defaults.
+
+ .. versionchanged:: 8.0
+ All ``kwargs`` are tracked in :attr:`params` so they will be
+ passed if :meth:`forward` is called at multiple levels.
+
+ .. versionchanged:: 3.2
+ A new context is created, and missing arguments use default values.
+ """
+ if isinstance(callback, Command):
+ other_cmd = callback
+
+ if other_cmd.callback is None:
+ raise TypeError(
+ "The given command does not have a callback that can be invoked."
+ )
+ else:
+ callback = t.cast("t.Callable[..., V]", other_cmd.callback)
+
+ ctx = self._make_sub_context(other_cmd)
+
+ for param in other_cmd.params:
+ if param.name not in kwargs and param.expose_value:
+ kwargs[param.name] = param.type_cast_value( # type: ignore
+ ctx, param.get_default(ctx)
+ )
+
+ # Track all kwargs as params, so that forward() will pass
+ # them on in subsequent calls.
+ ctx.params.update(kwargs)
+ else:
+ ctx = self
+
+ with augment_usage_errors(self):
+ with ctx:
+ return callback(*args, **kwargs)
+
+ def forward(self, cmd: Command, /, *args: t.Any, **kwargs: t.Any) -> t.Any:
+ """Similar to :meth:`invoke` but fills in default keyword
+ arguments from the current context if the other command expects
+ it. This cannot invoke callbacks directly, only other commands.
+
+ .. versionchanged:: 8.0
+ All ``kwargs`` are tracked in :attr:`params` so they will be
+ passed if ``forward`` is called at multiple levels.
+ """
+ # Can only forward to other commands, not direct callbacks.
+ if not isinstance(cmd, Command):
+ raise TypeError("Callback is not a command.")
+
+ for param in self.params:
+ if param not in kwargs:
+ kwargs[param] = self.params[param]
+
+ return self.invoke(cmd, *args, **kwargs)
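+
+    # Illustrative usage of invoke()/forward() (hypothetical commands):
+    #
+    #     @click.command()
+    #     @click.option("--count", default=1)
+    #     def repeat(count):
+    #         ...
+    #
+    #     @click.command()
+    #     @click.option("--count", default=1)
+    #     @click.pass_context
+    #     def wrapper(ctx, count):
+    #         ctx.invoke(repeat, count=count)  # explicit kwargs; other params get defaults
+    #         ctx.forward(repeat)              # reuses this context's own params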
+
+ def set_parameter_source(self, name: str, source: ParameterSource) -> None:
+ """Set the source of a parameter. This indicates the location
+ from which the value of the parameter was obtained.
+
+ :param name: The name of the parameter.
+ :param source: A member of :class:`~click.core.ParameterSource`.
+ """
+ self._parameter_source[name] = source
+
+ def get_parameter_source(self, name: str) -> ParameterSource | None:
+ """Get the source of a parameter. This indicates the location
+ from which the value of the parameter was obtained.
+
+ This can be useful for determining when a user specified a value
+ on the command line that is the same as the default value. It
+ will be :attr:`~click.core.ParameterSource.DEFAULT` only if the
+ value was actually taken from the default.
+
+ :param name: The name of the parameter.
+ :rtype: ParameterSource
+
+ .. versionchanged:: 8.0
+ Returns ``None`` if the parameter was not provided from any
+ source.
+ """
+ return self._parameter_source.get(name)
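+
+    # Illustrative usage of get_parameter_source() (hypothetical option name):
+    #
+    #     src = ctx.get_parameter_source("count")
+    #     if src is ParameterSource.DEFAULT:
+    #         ...  # --count was not supplied; the declared default was used
+    #     elif src is ParameterSource.COMMANDLINE:
+    #         ...  # the value came from the command line, even if equal to the default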
+
+
+class Command:
+ """Commands are the basic building block of command line interfaces in
+ Click. A basic command handles command line parsing and might dispatch
+ more parsing to commands nested below it.
+
+ :param name: the name of the command to use unless a group overrides it.
+ :param context_settings: an optional dictionary with defaults that are
+ passed to the context object.
+ :param callback: the callback to invoke. This is optional.
+ :param params: the parameters to register with this command. This can
+ be either :class:`Option` or :class:`Argument` objects.
+ :param help: the help string to use for this command.
+ :param epilog: like the help string but it's printed at the end of the
+ help page after everything else.
+ :param short_help: the short help to use for this command. This is
+ shown on the command listing of the parent command.
+ :param add_help_option: by default each command registers a ``--help``
+ option. This can be disabled by this parameter.
+ :param no_args_is_help: this controls what happens if no arguments are
+ provided. This option is disabled by default.
+ If enabled this will add ``--help`` as argument
+ if no arguments are passed
+ :param hidden: hide this command from help outputs.
+ :param deprecated: If ``True`` or non-empty string, issues a message
+ indicating that the command is deprecated and highlights
+ its deprecation in --help. The message can be customized
+ by using a string as the value.
+
+ .. versionchanged:: 8.2
+ This is the base class for all commands, not ``BaseCommand``.
+ ``deprecated`` can be set to a string as well to customize the
+ deprecation message.
+
+ .. versionchanged:: 8.1
+ ``help``, ``epilog``, and ``short_help`` are stored unprocessed,
+ all formatting is done when outputting help text, not at init,
+ and is done even if not using the ``@command`` decorator.
+
+ .. versionchanged:: 8.0
+ Added a ``repr`` showing the command name.
+
+ .. versionchanged:: 7.1
+ Added the ``no_args_is_help`` parameter.
+
+ .. versionchanged:: 2.0
+ Added the ``context_settings`` parameter.
+ """
+
+ #: The context class to create with :meth:`make_context`.
+ #:
+ #: .. versionadded:: 8.0
+ context_class: type[Context] = Context
+
+ #: the default for the :attr:`Context.allow_extra_args` flag.
+ allow_extra_args = False
+
+ #: the default for the :attr:`Context.allow_interspersed_args` flag.
+ allow_interspersed_args = True
+
+ #: the default for the :attr:`Context.ignore_unknown_options` flag.
+ ignore_unknown_options = False
+
+ def __init__(
+ self,
+ name: str | None,
+ context_settings: cabc.MutableMapping[str, t.Any] | None = None,
+ callback: t.Callable[..., t.Any] | None = None,
+ params: list[Parameter] | None = None,
+ help: str | None = None,
+ epilog: str | None = None,
+ short_help: str | None = None,
+ options_metavar: str | None = "[OPTIONS]",
+ add_help_option: bool = True,
+ no_args_is_help: bool = False,
+ hidden: bool = False,
+ deprecated: bool | str = False,
+ ) -> None:
+ #: the name the command thinks it has. Upon registering a command
+ #: on a :class:`Group` the group will default the command name
+ #: with this information. You should instead use the
+ #: :class:`Context`\'s :attr:`~Context.info_name` attribute.
+ self.name = name
+
+ if context_settings is None:
+ context_settings = {}
+
+ #: an optional dictionary with defaults passed to the context.
+ self.context_settings: cabc.MutableMapping[str, t.Any] = context_settings
+
+ #: the callback to execute when the command fires. This might be
+ #: `None` in which case nothing happens.
+ self.callback = callback
+ #: the list of parameters for this command in the order they
+ #: should show up in the help page and execute. Eager parameters
+ #: will automatically be handled before non eager ones.
+ self.params: list[Parameter] = params or []
+ self.help = help
+ self.epilog = epilog
+ self.options_metavar = options_metavar
+ self.short_help = short_help
+ self.add_help_option = add_help_option
+ self._help_option = None
+ self.no_args_is_help = no_args_is_help
+ self.hidden = hidden
+ self.deprecated = deprecated
+
+ def to_info_dict(self, ctx: Context) -> dict[str, t.Any]:
+ return {
+ "name": self.name,
+ "params": [param.to_info_dict() for param in self.get_params(ctx)],
+ "help": self.help,
+ "epilog": self.epilog,
+ "short_help": self.short_help,
+ "hidden": self.hidden,
+ "deprecated": self.deprecated,
+ }
+
+ def __repr__(self) -> str:
+ return f"<{self.__class__.__name__} {self.name}>"
+
+ def get_usage(self, ctx: Context) -> str:
+ """Formats the usage line into a string and returns it.
+
+ Calls :meth:`format_usage` internally.
+ """
+ formatter = ctx.make_formatter()
+ self.format_usage(ctx, formatter)
+ return formatter.getvalue().rstrip("\n")
+
+ def get_params(self, ctx: Context) -> list[Parameter]:
+ params = self.params
+ help_option = self.get_help_option(ctx)
+
+ if help_option is not None:
+ params = [*params, help_option]
+
+ if __debug__:
+ import warnings
+
+ opts = [opt for param in params for opt in param.opts]
+ opts_counter = Counter(opts)
+ duplicate_opts = (opt for opt, count in opts_counter.items() if count > 1)
+
+ for duplicate_opt in duplicate_opts:
+ warnings.warn(
+ (
+ f"The parameter {duplicate_opt} is used more than once. "
+ "Remove its duplicate as parameters should be unique."
+ ),
+ stacklevel=3,
+ )
+
+ return params
+
+ def format_usage(self, ctx: Context, formatter: HelpFormatter) -> None:
+ """Writes the usage line into the formatter.
+
+ This is a low-level method called by :meth:`get_usage`.
+ """
+ pieces = self.collect_usage_pieces(ctx)
+ formatter.write_usage(ctx.command_path, " ".join(pieces))
+
+ def collect_usage_pieces(self, ctx: Context) -> list[str]:
+ """Returns all the pieces that go into the usage line and returns
+ it as a list of strings.
+ """
+ rv = [self.options_metavar] if self.options_metavar else []
+
+ for param in self.get_params(ctx):
+ rv.extend(param.get_usage_pieces(ctx))
+
+ return rv
+
+ def get_help_option_names(self, ctx: Context) -> list[str]:
+ """Returns the names for the help option."""
+ all_names = set(ctx.help_option_names)
+ for param in self.params:
+ all_names.difference_update(param.opts)
+ all_names.difference_update(param.secondary_opts)
+ return list(all_names)
+
+ def get_help_option(self, ctx: Context) -> Option | None:
+ """Returns the help option object.
+
+ Skipped if :attr:`add_help_option` is ``False``.
+
+ .. versionchanged:: 8.1.8
+ The help option is now cached to avoid creating it multiple times.
+ """
+ help_option_names = self.get_help_option_names(ctx)
+
+ if not help_option_names or not self.add_help_option:
+ return None
+
+ # Cache the help option object in private _help_option attribute to
+ # avoid creating it multiple times. Not doing this will break the
+        # callback ordering by iter_params_for_processing(), which relies on
+ # object comparison.
+ if self._help_option is None:
+ # Avoid circular import.
+ from .decorators import help_option
+
+ # Apply help_option decorator and pop resulting option
+ help_option(*help_option_names)(self)
+ self._help_option = self.params.pop() # type: ignore[assignment]
+
+ return self._help_option
+
+ def make_parser(self, ctx: Context) -> _OptionParser:
+ """Creates the underlying option parser for this command."""
+ parser = _OptionParser(ctx)
+ for param in self.get_params(ctx):
+ param.add_to_parser(parser, ctx)
+ return parser
+
+ def get_help(self, ctx: Context) -> str:
+ """Formats the help into a string and returns it.
+
+ Calls :meth:`format_help` internally.
+ """
+ formatter = ctx.make_formatter()
+ self.format_help(ctx, formatter)
+ return formatter.getvalue().rstrip("\n")
+
+ def get_short_help_str(self, limit: int = 45) -> str:
+ """Gets short help for the command or makes it by shortening the
+ long help string.
+ """
+ if self.short_help:
+ text = inspect.cleandoc(self.short_help)
+ elif self.help:
+ text = make_default_short_help(self.help, limit)
+ else:
+ text = ""
+
+ if self.deprecated:
+ deprecated_message = (
+ f"(DEPRECATED: {self.deprecated})"
+ if isinstance(self.deprecated, str)
+ else "(DEPRECATED)"
+ )
+ text = _("{text} {deprecated_message}").format(
+ text=text, deprecated_message=deprecated_message
+ )
+
+ return text.strip()
+
+ def format_help(self, ctx: Context, formatter: HelpFormatter) -> None:
+ """Writes the help into the formatter if it exists.
+
+ This is a low-level method called by :meth:`get_help`.
+
+ This calls the following methods:
+
+ - :meth:`format_usage`
+ - :meth:`format_help_text`
+ - :meth:`format_options`
+ - :meth:`format_epilog`
+ """
+ self.format_usage(ctx, formatter)
+ self.format_help_text(ctx, formatter)
+ self.format_options(ctx, formatter)
+ self.format_epilog(ctx, formatter)
+
+ def format_help_text(self, ctx: Context, formatter: HelpFormatter) -> None:
+ """Writes the help text to the formatter if it exists."""
+ if self.help is not None:
+ # truncate the help text to the first form feed
+ text = inspect.cleandoc(self.help).partition("\f")[0]
+ else:
+ text = ""
+
+ if self.deprecated:
+ deprecated_message = (
+ f"(DEPRECATED: {self.deprecated})"
+ if isinstance(self.deprecated, str)
+ else "(DEPRECATED)"
+ )
+ text = _("{text} {deprecated_message}").format(
+ text=text, deprecated_message=deprecated_message
+ )
+
+ if text:
+ formatter.write_paragraph()
+
+ with formatter.indentation():
+ formatter.write_text(text)
+
+ def format_options(self, ctx: Context, formatter: HelpFormatter) -> None:
+ """Writes all the options into the formatter if they exist."""
+ opts = []
+ for param in self.get_params(ctx):
+ rv = param.get_help_record(ctx)
+ if rv is not None:
+ opts.append(rv)
+
+ if opts:
+ with formatter.section(_("Options")):
+ formatter.write_dl(opts)
+
+ def format_epilog(self, ctx: Context, formatter: HelpFormatter) -> None:
+ """Writes the epilog into the formatter if it exists."""
+ if self.epilog:
+ epilog = inspect.cleandoc(self.epilog)
+ formatter.write_paragraph()
+
+ with formatter.indentation():
+ formatter.write_text(epilog)
+
+ def make_context(
+ self,
+ info_name: str | None,
+ args: list[str],
+ parent: Context | None = None,
+ **extra: t.Any,
+ ) -> Context:
+ """This function when given an info name and arguments will kick
+ off the parsing and create a new :class:`Context`. It does not
+ invoke the actual command callback though.
+
+ To quickly customize the context class used without overriding
+ this method, set the :attr:`context_class` attribute.
+
+ :param info_name: the info name for this invocation. Generally this
+ is the most descriptive name for the script or
+ command. For the toplevel script it's usually
+ the name of the script, for commands below it's
+ the name of the command.
+ :param args: the arguments to parse as list of strings.
+ :param parent: the parent context if available.
+ :param extra: extra keyword arguments forwarded to the context
+ constructor.
+
+ .. versionchanged:: 8.0
+ Added the :attr:`context_class` attribute.
+ """
+ for key, value in self.context_settings.items():
+ if key not in extra:
+ extra[key] = value
+
+ ctx = self.context_class(self, info_name=info_name, parent=parent, **extra)
+
+ with ctx.scope(cleanup=False):
+ self.parse_args(ctx, args)
+ return ctx
+
+ def parse_args(self, ctx: Context, args: list[str]) -> list[str]:
+ if not args and self.no_args_is_help and not ctx.resilient_parsing:
+ raise NoArgsIsHelpError(ctx)
+
+ parser = self.make_parser(ctx)
+ opts, args, param_order = parser.parse_args(args=args)
+
+ for param in iter_params_for_processing(param_order, self.get_params(ctx)):
+ _, args = param.handle_parse_result(ctx, opts, args)
+
+ if args and not ctx.allow_extra_args and not ctx.resilient_parsing:
+ ctx.fail(
+ ngettext(
+ "Got unexpected extra argument ({args})",
+ "Got unexpected extra arguments ({args})",
+ len(args),
+ ).format(args=" ".join(map(str, args)))
+ )
+
+ ctx.args = args
+ ctx._opt_prefixes.update(parser._opt_prefixes)
+ return args
+
+ def invoke(self, ctx: Context) -> t.Any:
+ """Given a context, this invokes the attached callback (if it exists)
+ in the right way.
+ """
+ if self.deprecated:
+ extra_message = (
+ f" {self.deprecated}" if isinstance(self.deprecated, str) else ""
+ )
+ message = _(
+ "DeprecationWarning: The command {name!r} is deprecated.{extra_message}"
+ ).format(name=self.name, extra_message=extra_message)
+ echo(style(message, fg="red"), err=True)
+
+ if self.callback is not None:
+ return ctx.invoke(self.callback, **ctx.params)
+
+ def shell_complete(self, ctx: Context, incomplete: str) -> list[CompletionItem]:
+ """Return a list of completions for the incomplete value. Looks
+ at the names of options and chained multi-commands.
+
+ Any command could be part of a chained multi-command, so sibling
+ commands are valid at any point during command completion.
+
+ :param ctx: Invocation context for this command.
+ :param incomplete: Value being completed. May be empty.
+
+ .. versionadded:: 8.0
+ """
+ from click.shell_completion import CompletionItem
+
+ results: list[CompletionItem] = []
+
+ if incomplete and not incomplete[0].isalnum():
+ for param in self.get_params(ctx):
+ if (
+ not isinstance(param, Option)
+ or param.hidden
+ or (
+ not param.multiple
+ and ctx.get_parameter_source(param.name) # type: ignore
+ is ParameterSource.COMMANDLINE
+ )
+ ):
+ continue
+
+ results.extend(
+ CompletionItem(name, help=param.help)
+ for name in [*param.opts, *param.secondary_opts]
+ if name.startswith(incomplete)
+ )
+
+ while ctx.parent is not None:
+ ctx = ctx.parent
+
+ if isinstance(ctx.command, Group) and ctx.command.chain:
+ results.extend(
+ CompletionItem(name, help=command.get_short_help_str())
+ for name, command in _complete_visible_commands(ctx, incomplete)
+ if name not in ctx._protected_args
+ )
+
+ return results
+
+ @t.overload
+ def main(
+ self,
+ args: cabc.Sequence[str] | None = None,
+ prog_name: str | None = None,
+ complete_var: str | None = None,
+ standalone_mode: t.Literal[True] = True,
+ **extra: t.Any,
+ ) -> t.NoReturn: ...
+
+ @t.overload
+ def main(
+ self,
+ args: cabc.Sequence[str] | None = None,
+ prog_name: str | None = None,
+ complete_var: str | None = None,
+ standalone_mode: bool = ...,
+ **extra: t.Any,
+ ) -> t.Any: ...
+
+ def main(
+ self,
+ args: cabc.Sequence[str] | None = None,
+ prog_name: str | None = None,
+ complete_var: str | None = None,
+ standalone_mode: bool = True,
+ windows_expand_args: bool = True,
+ **extra: t.Any,
+ ) -> t.Any:
+ """This is the way to invoke a script with all the bells and
+ whistles as a command line application. This will always terminate
+ the application after a call. If this is not wanted, ``SystemExit``
+ needs to be caught.
+
+ This method is also available by directly calling the instance of
+ a :class:`Command`.
+
+ :param args: the arguments that should be used for parsing. If not
+ provided, ``sys.argv[1:]`` is used.
+ :param prog_name: the program name that should be used. By default
+ the program name is constructed by taking the file
+ name from ``sys.argv[0]``.
+        :param complete_var: the environment variable that controls the
+                             bash completion support. The default is
+                             ``"_<prog_name>_COMPLETE"`` with prog_name in
+                             uppercase.
+ :param standalone_mode: the default behavior is to invoke the script
+ in standalone mode. Click will then
+ handle exceptions and convert them into
+ error messages and the function will never
+ return but shut down the interpreter. If
+ this is set to `False` they will be
+ propagated to the caller and the return
+ value of this function is the return value
+ of :meth:`invoke`.
+ :param windows_expand_args: Expand glob patterns, user dir, and
+ env vars in command line args on Windows.
+ :param extra: extra keyword arguments are forwarded to the context
+ constructor. See :class:`Context` for more information.
+
+ .. versionchanged:: 8.0.1
+ Added the ``windows_expand_args`` parameter to allow
+ disabling command line arg expansion on Windows.
+
+ .. versionchanged:: 8.0
+ When taking arguments from ``sys.argv`` on Windows, glob
+ patterns, user dir, and env vars are expanded.
+
+ .. versionchanged:: 3.0
+ Added the ``standalone_mode`` parameter.
+ """
+ if args is None:
+ args = sys.argv[1:]
+
+ if os.name == "nt" and windows_expand_args:
+ args = _expand_args(args)
+ else:
+ args = list(args)
+
+ if prog_name is None:
+ prog_name = _detect_program_name()
+
+ # Process shell completion requests and exit early.
+ self._main_shell_completion(extra, prog_name, complete_var)
+
+ try:
+ try:
+ with self.make_context(prog_name, args, **extra) as ctx:
+ rv = self.invoke(ctx)
+ if not standalone_mode:
+ return rv
+ # it's not safe to `ctx.exit(rv)` here!
+ # note that `rv` may actually contain data like "1" which
+ # has obvious effects
+ # more subtle case: `rv=[None, None]` can come out of
+ # chained commands which all returned `None` -- so it's not
+ # even always obvious that `rv` indicates success/failure
+ # by its truthiness/falsiness
+ ctx.exit()
+ except (EOFError, KeyboardInterrupt) as e:
+ echo(file=sys.stderr)
+ raise Abort() from e
+ except ClickException as e:
+ if not standalone_mode:
+ raise
+ e.show()
+ sys.exit(e.exit_code)
+ except OSError as e:
+ if e.errno == errno.EPIPE:
+ sys.stdout = t.cast(t.TextIO, PacifyFlushWrapper(sys.stdout))
+ sys.stderr = t.cast(t.TextIO, PacifyFlushWrapper(sys.stderr))
+ sys.exit(1)
+ else:
+ raise
+ except Exit as e:
+ if standalone_mode:
+ sys.exit(e.exit_code)
+ else:
+ # in non-standalone mode, return the exit code
+ # note that this is only reached if `self.invoke` above raises
+ # an Exit explicitly -- thus bypassing the check there which
+ # would return its result
+ # the results of non-standalone execution may therefore be
+ # somewhat ambiguous: if there are codepaths which lead to
+ # `ctx.exit(1)` and to `return 1`, the caller won't be able to
+ # tell the difference between the two
+ return e.exit_code
+ except Abort:
+ if not standalone_mode:
+ raise
+ echo(_("Aborted!"), file=sys.stderr)
+ sys.exit(1)
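+
+    # Illustrative usage of main() (assuming a hypothetical `cli` Command):
+    #
+    #     cli()                                  # same as cli.main(); ends with SystemExit
+    #     rv = cli.main(["--name", "x"], standalone_mode=False)
+    #     # With standalone_mode=False, Click exceptions propagate to the caller
+    #     # and `rv` is the callback's return value instead of the process exiting.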
+
+ def _main_shell_completion(
+ self,
+ ctx_args: cabc.MutableMapping[str, t.Any],
+ prog_name: str,
+ complete_var: str | None = None,
+ ) -> None:
+ """Check if the shell is asking for tab completion, process
+ that, then exit early. Called from :meth:`main` before the
+ program is invoked.
+
+ :param prog_name: Name of the executable in the shell.
+ :param complete_var: Name of the environment variable that holds
+ the completion instruction. Defaults to
+ ``_{PROG_NAME}_COMPLETE``.
+
+ .. versionchanged:: 8.2.0
+ Dots (``.``) in ``prog_name`` are replaced with underscores (``_``).
+ """
+ if complete_var is None:
+ complete_name = prog_name.replace("-", "_").replace(".", "_")
+ complete_var = f"_{complete_name}_COMPLETE".upper()
+
+ instruction = os.environ.get(complete_var)
+
+ if not instruction:
+ return
+
+ from .shell_completion import shell_complete
+
+ rv = shell_complete(self, ctx_args, prog_name, complete_var, instruction)
+ sys.exit(rv)
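+
+    # Naming-scheme example (illustrative): for prog_name "my-tool.cli" the
+    # completion variable becomes "_MY_TOOL_CLI_COMPLETE"; dashes and dots are
+    # normalized to underscores before upper-casing.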
+
+ def __call__(self, *args: t.Any, **kwargs: t.Any) -> t.Any:
+ """Alias for :meth:`main`."""
+ return self.main(*args, **kwargs)
+
+
+class _FakeSubclassCheck(type):
+ def __subclasscheck__(cls, subclass: type) -> bool:
+ return issubclass(subclass, cls.__bases__[0])
+
+ def __instancecheck__(cls, instance: t.Any) -> bool:
+ return isinstance(instance, cls.__bases__[0])
+
+
+class _BaseCommand(Command, metaclass=_FakeSubclassCheck):
+ """
+ .. deprecated:: 8.2
+ Will be removed in Click 9.0. Use ``Command`` instead.
+ """
+
+
+class Group(Command):
+ """A group is a command that nests other commands (or more groups).
+
+ :param name: The name of the group command.
+ :param commands: Map names to :class:`Command` objects. Can be a list, which
+ will use :attr:`Command.name` as the keys.
+ :param invoke_without_command: Invoke the group's callback even if a
+ subcommand is not given.
+ :param no_args_is_help: If no arguments are given, show the group's help and
+ exit. Defaults to the opposite of ``invoke_without_command``.
+ :param subcommand_metavar: How to represent the subcommand argument in help.
+ The default will represent whether ``chain`` is set or not.
+ :param chain: Allow passing more than one subcommand argument. After parsing
+ a command's arguments, if any arguments remain another command will be
+ matched, and so on.
+ :param result_callback: A function to call after the group's and
+ subcommand's callbacks. The value returned by the subcommand is passed.
+ If ``chain`` is enabled, the value will be a list of values returned by
+ all the commands. If ``invoke_without_command`` is enabled, the value
+ will be the value returned by the group's callback, or an empty list if
+ ``chain`` is enabled.
+ :param kwargs: Other arguments passed to :class:`Command`.
+
+ .. versionchanged:: 8.0
+ The ``commands`` argument can be a list of command objects.
+
+ .. versionchanged:: 8.2
+ Merged with and replaces the ``MultiCommand`` base class.
+ """
+
+ allow_extra_args = True
+ allow_interspersed_args = False
+
+ #: If set, this is used by the group's :meth:`command` decorator
+ #: as the default :class:`Command` class. This is useful to make all
+ #: subcommands use a custom command class.
+ #:
+ #: .. versionadded:: 8.0
+ command_class: type[Command] | None = None
+
+ #: If set, this is used by the group's :meth:`group` decorator
+ #: as the default :class:`Group` class. This is useful to make all
+ #: subgroups use a custom group class.
+ #:
+ #: If set to the special value :class:`type` (literally
+ #: ``group_class = type``), this group's class will be used as the
+ #: default class. This makes a custom group class continue to make
+ #: custom groups.
+ #:
+ #: .. versionadded:: 8.0
+ group_class: type[Group] | type[type] | None = None
+ # Literal[type] isn't valid, so use Type[type]
+
+ def __init__(
+ self,
+ name: str | None = None,
+ commands: cabc.MutableMapping[str, Command]
+ | cabc.Sequence[Command]
+ | None = None,
+ invoke_without_command: bool = False,
+ no_args_is_help: bool | None = None,
+ subcommand_metavar: str | None = None,
+ chain: bool = False,
+ result_callback: t.Callable[..., t.Any] | None = None,
+ **kwargs: t.Any,
+ ) -> None:
+ super().__init__(name, **kwargs)
+
+ if commands is None:
+ commands = {}
+ elif isinstance(commands, abc.Sequence):
+ commands = {c.name: c for c in commands if c.name is not None}
+
+ #: The registered subcommands by their exported names.
+ self.commands: cabc.MutableMapping[str, Command] = commands
+
+ if no_args_is_help is None:
+ no_args_is_help = not invoke_without_command
+
+ self.no_args_is_help = no_args_is_help
+ self.invoke_without_command = invoke_without_command
+
+ if subcommand_metavar is None:
+ if chain:
+ subcommand_metavar = "COMMAND1 [ARGS]... [COMMAND2 [ARGS]...]..."
+ else:
+ subcommand_metavar = "COMMAND [ARGS]..."
+
+ self.subcommand_metavar = subcommand_metavar
+ self.chain = chain
+ # The result callback that is stored. This can be set or
+ # overridden with the :func:`result_callback` decorator.
+ self._result_callback = result_callback
+
+ if self.chain:
+ for param in self.params:
+ if isinstance(param, Argument) and not param.required:
+ raise RuntimeError(
+ "A group in chain mode cannot have optional arguments."
+ )
+
+ def to_info_dict(self, ctx: Context) -> dict[str, t.Any]:
+ info_dict = super().to_info_dict(ctx)
+ commands = {}
+
+ for name in self.list_commands(ctx):
+ command = self.get_command(ctx, name)
+
+ if command is None:
+ continue
+
+ sub_ctx = ctx._make_sub_context(command)
+
+ with sub_ctx.scope(cleanup=False):
+ commands[name] = command.to_info_dict(sub_ctx)
+
+ info_dict.update(commands=commands, chain=self.chain)
+ return info_dict
+
+ def add_command(self, cmd: Command, name: str | None = None) -> None:
+ """Registers another :class:`Command` with this group. If the name
+ is not provided, the name of the command is used.
+ """
+ name = name or cmd.name
+ if name is None:
+ raise TypeError("Command has no name.")
+ _check_nested_chain(self, name, cmd, register=True)
+ self.commands[name] = cmd
+
+ @t.overload
+ def command(self, __func: t.Callable[..., t.Any]) -> Command: ...
+
+ @t.overload
+ def command(
+ self, *args: t.Any, **kwargs: t.Any
+ ) -> t.Callable[[t.Callable[..., t.Any]], Command]: ...
+
+ def command(
+ self, *args: t.Any, **kwargs: t.Any
+ ) -> t.Callable[[t.Callable[..., t.Any]], Command] | Command:
+ """A shortcut decorator for declaring and attaching a command to
+ the group. This takes the same arguments as :func:`command` and
+ immediately registers the created command with this group by
+ calling :meth:`add_command`.
+
+ To customize the command class used, set the
+ :attr:`command_class` attribute.
+
+ .. versionchanged:: 8.1
+ This decorator can be applied without parentheses.
+
+ .. versionchanged:: 8.0
+ Added the :attr:`command_class` attribute.
+ """
+ from .decorators import command
+
+ func: t.Callable[..., t.Any] | None = None
+
+ if args and callable(args[0]):
+ assert len(args) == 1 and not kwargs, (
+ "Use 'command(**kwargs)(callable)' to provide arguments."
+ )
+ (func,) = args
+ args = ()
+
+ if self.command_class and kwargs.get("cls") is None:
+ kwargs["cls"] = self.command_class
+
+ def decorator(f: t.Callable[..., t.Any]) -> Command:
+ cmd: Command = command(*args, **kwargs)(f)
+ self.add_command(cmd)
+ return cmd
+
+ if func is not None:
+ return decorator(func)
+
+ return decorator
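+
+    # Illustrative usage of the command() shortcut (hypothetical names):
+    #
+    #     @click.group()
+    #     def cli():
+    #         ...
+    #
+    #     @cli.command()   # same as click.command() followed by cli.add_command()
+    #     def sync():
+    #         ...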
+
+ @t.overload
+ def group(self, __func: t.Callable[..., t.Any]) -> Group: ...
+
+ @t.overload
+ def group(
+ self, *args: t.Any, **kwargs: t.Any
+ ) -> t.Callable[[t.Callable[..., t.Any]], Group]: ...
+
+ def group(
+ self, *args: t.Any, **kwargs: t.Any
+ ) -> t.Callable[[t.Callable[..., t.Any]], Group] | Group:
+ """A shortcut decorator for declaring and attaching a group to
+ the group. This takes the same arguments as :func:`group` and
+ immediately registers the created group with this group by
+ calling :meth:`add_command`.
+
+ To customize the group class used, set the :attr:`group_class`
+ attribute.
+
+ .. versionchanged:: 8.1
+ This decorator can be applied without parentheses.
+
+ .. versionchanged:: 8.0
+ Added the :attr:`group_class` attribute.
+ """
+ from .decorators import group
+
+ func: t.Callable[..., t.Any] | None = None
+
+ if args and callable(args[0]):
+ assert len(args) == 1 and not kwargs, (
+ "Use 'group(**kwargs)(callable)' to provide arguments."
+ )
+ (func,) = args
+ args = ()
+
+ if self.group_class is not None and kwargs.get("cls") is None:
+ if self.group_class is type:
+ kwargs["cls"] = type(self)
+ else:
+ kwargs["cls"] = self.group_class
+
+ def decorator(f: t.Callable[..., t.Any]) -> Group:
+ cmd: Group = group(*args, **kwargs)(f)
+ self.add_command(cmd)
+ return cmd
+
+ if func is not None:
+ return decorator(func)
+
+ return decorator
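+
+    # Usage sketch: nesting groups with the ``group`` shortcut. ``cli``, ``db``
+    # and ``migrate`` are placeholder names; the nested command is invoked as
+    # ``cli db migrate``.
+    #
+    #     @click.group()
+    #     def cli():
+    #         pass
+    #
+    #     @cli.group()
+    #     def db():
+    #         pass
+    #
+    #     @db.command()
+    #     def migrate():
+    #         click.echo("migrating")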
+
+ def result_callback(self, replace: bool = False) -> t.Callable[[F], F]:
+ """Adds a result callback to the command. By default if a
+ result callback is already registered this will chain them but
+ this can be disabled with the `replace` parameter. The result
+ callback is invoked with the return value of the subcommand
+ (or the list of return values from all subcommands if chaining
+ is enabled) as well as the parameters as they would be passed
+ to the main callback.
+
+ Example::
+
+ @click.group()
+ @click.option('-i', '--input', default=23)
+ def cli(input):
+ return 42
+
+ @cli.result_callback()
+ def process_result(result, input):
+ return result + input
+
+ :param replace: if set to `True` an already existing result
+ callback will be removed.
+
+ .. versionchanged:: 8.0
+ Renamed from ``resultcallback``.
+
+ .. versionadded:: 3.0
+ """
+
+ def decorator(f: F) -> F:
+ old_callback = self._result_callback
+
+ if old_callback is None or replace:
+ self._result_callback = f
+ return f
+
+ def function(value: t.Any, /, *args: t.Any, **kwargs: t.Any) -> t.Any:
+ inner = old_callback(value, *args, **kwargs)
+ return f(inner, *args, **kwargs)
+
+ self._result_callback = rv = update_wrapper(t.cast(F, function), f)
+ return rv # type: ignore[return-value]
+
+ return decorator
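+
+    # Usage sketch: with ``chain=True`` the result callback receives the list of
+    # every subcommand's return value. ``pipeline`` and ``report`` are
+    # placeholder names.
+    #
+    #     @click.group(chain=True)
+    #     def pipeline():
+    #         pass
+    #
+    #     @pipeline.result_callback()
+    #     def report(results):
+    #         click.echo(f"{len(results)} step(s) completed")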
+
+ def get_command(self, ctx: Context, cmd_name: str) -> Command | None:
+ """Given a context and a command name, this returns a :class:`Command`
+ object if it exists or returns ``None``.
+ """
+ return self.commands.get(cmd_name)
+
+ def list_commands(self, ctx: Context) -> list[str]:
+ """Returns a list of subcommand names in the order they should appear."""
+ return sorted(self.commands)
+
+ def collect_usage_pieces(self, ctx: Context) -> list[str]:
+ rv = super().collect_usage_pieces(ctx)
+ rv.append(self.subcommand_metavar)
+ return rv
+
+ def format_options(self, ctx: Context, formatter: HelpFormatter) -> None:
+ super().format_options(ctx, formatter)
+ self.format_commands(ctx, formatter)
+
+ def format_commands(self, ctx: Context, formatter: HelpFormatter) -> None:
+ """Extra format methods for multi methods that adds all the commands
+ after the options.
+ """
+ commands = []
+ for subcommand in self.list_commands(ctx):
+ cmd = self.get_command(ctx, subcommand)
+            # list_commands() advertised a name that get_command() cannot resolve; skip it.
+ if cmd is None:
+ continue
+ if cmd.hidden:
+ continue
+
+ commands.append((subcommand, cmd))
+
+ # allow for 3 times the default spacing
+ if len(commands):
+ limit = formatter.width - 6 - max(len(cmd[0]) for cmd in commands)
+
+ rows = []
+ for subcommand, cmd in commands:
+ help = cmd.get_short_help_str(limit)
+ rows.append((subcommand, help))
+
+ if rows:
+ with formatter.section(_("Commands")):
+ formatter.write_dl(rows)
+
+ def parse_args(self, ctx: Context, args: list[str]) -> list[str]:
+ if not args and self.no_args_is_help and not ctx.resilient_parsing:
+ raise NoArgsIsHelpError(ctx)
+
+ rest = super().parse_args(ctx, args)
+
+ if self.chain:
+ ctx._protected_args = rest
+ ctx.args = []
+ elif rest:
+ ctx._protected_args, ctx.args = rest[:1], rest[1:]
+
+ return ctx.args
+
+ def invoke(self, ctx: Context) -> t.Any:
+ def _process_result(value: t.Any) -> t.Any:
+ if self._result_callback is not None:
+ value = ctx.invoke(self._result_callback, value, **ctx.params)
+ return value
+
+ if not ctx._protected_args:
+ if self.invoke_without_command:
+ # No subcommand was invoked, so the result callback is
+ # invoked with the group return value for regular
+ # groups, or an empty list for chained groups.
+ with ctx:
+ rv = super().invoke(ctx)
+ return _process_result([] if self.chain else rv)
+ ctx.fail(_("Missing command."))
+
+ # Fetch args back out
+ args = [*ctx._protected_args, *ctx.args]
+ ctx.args = []
+ ctx._protected_args = []
+
+ # If we're not in chain mode, we only allow the invocation of a
+ # single command but we also inform the current context about the
+ # name of the command to invoke.
+ if not self.chain:
+ # Make sure the context is entered so we do not clean up
+ # resources until the result processor has worked.
+ with ctx:
+ cmd_name, cmd, args = self.resolve_command(ctx, args)
+ assert cmd is not None
+ ctx.invoked_subcommand = cmd_name
+ super().invoke(ctx)
+ sub_ctx = cmd.make_context(cmd_name, args, parent=ctx)
+ with sub_ctx:
+ return _process_result(sub_ctx.command.invoke(sub_ctx))
+
+ # In chain mode we create the contexts step by step, but after the
+ # base command has been invoked. Because at that point we do not
+ # know the subcommands yet, the invoked subcommand attribute is
+ # set to ``*`` to inform the command that subcommands are executed
+ # but nothing else.
+ with ctx:
+ ctx.invoked_subcommand = "*" if args else None
+ super().invoke(ctx)
+
+ # Otherwise we make every single context and invoke them in a
+ # chain. In that case the return value to the result processor
+ # is the list of all invoked subcommand's results.
+ contexts = []
+ while args:
+ cmd_name, cmd, args = self.resolve_command(ctx, args)
+ assert cmd is not None
+ sub_ctx = cmd.make_context(
+ cmd_name,
+ args,
+ parent=ctx,
+ allow_extra_args=True,
+ allow_interspersed_args=False,
+ )
+ contexts.append(sub_ctx)
+ args, sub_ctx.args = sub_ctx.args, []
+
+ rv = []
+ for sub_ctx in contexts:
+ with sub_ctx:
+ rv.append(sub_ctx.command.invoke(sub_ctx))
+ return _process_result(rv)
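+
+    # Usage sketch: with ``invoke_without_command=True`` the group callback runs
+    # even when no subcommand is given, and ``ctx.invoked_subcommand`` tells it
+    # whether one will follow. ``cli`` is a placeholder name.
+    #
+    #     @click.group(invoke_without_command=True)
+    #     @click.pass_context
+    #     def cli(ctx):
+    #         if ctx.invoked_subcommand is None:
+    #             click.echo("running without a subcommand")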
+
+ def resolve_command(
+ self, ctx: Context, args: list[str]
+ ) -> tuple[str | None, Command | None, list[str]]:
+ cmd_name = make_str(args[0])
+ original_cmd_name = cmd_name
+
+ # Get the command
+ cmd = self.get_command(ctx, cmd_name)
+
+ # If we can't find the command but there is a normalization
+ # function available, we try with that one.
+ if cmd is None and ctx.token_normalize_func is not None:
+ cmd_name = ctx.token_normalize_func(cmd_name)
+ cmd = self.get_command(ctx, cmd_name)
+
+        # If we don't find the command, we want to show an error message
+        # telling the user that it was not provided. However, there is
+        # something else we should do first: if the first argument looks
+        # like an option, we kick off parsing again so that arguments such
+        # as --help are resolved against the group itself.
+ if cmd is None and not ctx.resilient_parsing:
+ if _split_opt(cmd_name)[0]:
+ self.parse_args(ctx, args)
+ ctx.fail(_("No such command {name!r}.").format(name=original_cmd_name))
+ return cmd_name if cmd else None, cmd, args[1:]
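+
+    # Usage sketch: command lookup honours ``Context.token_normalize_func``, so a
+    # group created with the (placeholder) settings below resolves ``SYNC`` to
+    # the ``sync`` command.
+    #
+    #     @click.group(context_settings={"token_normalize_func": str.lower})
+    #     def cli():
+    #         pass
+    #
+    #     @cli.command()
+    #     def sync():
+    #         click.echo("syncing")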
+
+ def shell_complete(self, ctx: Context, incomplete: str) -> list[CompletionItem]:
+ """Return a list of completions for the incomplete value. Looks
+ at the names of options, subcommands, and chained
+ multi-commands.
+
+ :param ctx: Invocation context for this command.
+ :param incomplete: Value being completed. May be empty.
+
+ .. versionadded:: 8.0
+ """
+ from click.shell_completion import CompletionItem
+
+ results = [
+ CompletionItem(name, help=command.get_short_help_str())
+ for name, command in _complete_visible_commands(ctx, incomplete)
+ ]
+ results.extend(super().shell_complete(ctx, incomplete))
+ return results
+
+
+class _MultiCommand(Group, metaclass=_FakeSubclassCheck):
+ """
+ .. deprecated:: 8.2
+ Will be removed in Click 9.0. Use ``Group`` instead.
+ """
+
+
+class CommandCollection(Group):
+ """A :class:`Group` that looks up subcommands on other groups. If a command
+ is not found on this group, each registered source is checked in order.
+ Parameters on a source are not added to this group, and a source's callback
+ is not invoked when invoking its commands. In other words, this "flattens"
+ commands in many groups into this one group.
+
+ :param name: The name of the group command.
+ :param sources: A list of :class:`Group` objects to look up commands from.
+ :param kwargs: Other arguments passed to :class:`Group`.
+
+ .. versionchanged:: 8.2
+ This is a subclass of ``Group``. Commands are looked up first on this
+ group, then each of its sources.
+ """
+
+ def __init__(
+ self,
+ name: str | None = None,
+ sources: list[Group] | None = None,
+ **kwargs: t.Any,
+ ) -> None:
+ super().__init__(name, **kwargs)
+ #: The list of registered groups.
+ self.sources: list[Group] = sources or []
+
+ def add_source(self, group: Group) -> None:
+ """Add a group as a source of commands."""
+ self.sources.append(group)
+
+ def get_command(self, ctx: Context, cmd_name: str) -> Command | None:
+ rv = super().get_command(ctx, cmd_name)
+
+ if rv is not None:
+ return rv
+
+ for source in self.sources:
+ rv = source.get_command(ctx, cmd_name)
+
+ if rv is not None:
+ if self.chain:
+ _check_nested_chain(self, cmd_name, rv)
+
+ return rv
+
+ return None
+
+ def list_commands(self, ctx: Context) -> list[str]:
+ rv: set[str] = set(super().list_commands(ctx))
+
+ for source in self.sources:
+ rv.update(source.list_commands(ctx))
+
+ return sorted(rv)
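+
+    # Usage sketch: merging the commands of several groups into one entry point.
+    # ``tools`` and ``extras`` are placeholder groups.
+    #
+    #     @click.group()
+    #     def tools():
+    #         pass
+    #
+    #     @click.group()
+    #     def extras():
+    #         pass
+    #
+    #     cli = click.CommandCollection(sources=[tools, extras])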
+
+
+def _check_iter(value: t.Any) -> cabc.Iterator[t.Any]:
+ """Check if the value is iterable but not a string. Raises a type
+ error, or return an iterator over the value.
+ """
+ if isinstance(value, str):
+ raise TypeError
+
+ return iter(value)
+
+
+class Parameter:
+ r"""A parameter to a command comes in two versions: they are either
+ :class:`Option`\s or :class:`Argument`\s. Other subclasses are currently
+ not supported by design as some of the internals for parsing are
+ intentionally not finalized.
+
+ Some settings are supported by both options and arguments.
+
+ :param param_decls: the parameter declarations for this option or
+ argument. This is a list of flags or argument
+ names.
+ :param type: the type that should be used. Either a :class:`ParamType`
+ or a Python type. The latter is converted into the former
+ automatically if supported.
+ :param required: controls if this is optional or not.
+ :param default: the default value if omitted. This can also be a callable,
+ in which case it's invoked when the default is needed
+ without any arguments.
+ :param callback: A function to further process or validate the value
+ after type conversion. It is called as ``f(ctx, param, value)``
+ and must return the value. It is called for all sources,
+ including prompts.
+ :param nargs: the number of arguments to match. If not ``1`` the return
+ value is a tuple instead of single value. The default for
+ nargs is ``1`` (except if the type is a tuple, then it's
+ the arity of the tuple). If ``nargs=-1``, all remaining
+ parameters are collected.
+ :param metavar: how the value is represented in the help page.
+ :param expose_value: if this is `True` then the value is passed onwards
+ to the command callback and stored on the context,
+ otherwise it's skipped.
+    :param is_eager: eager values are processed before non-eager ones. This
+                     should not be set for arguments or it will invert the
+                     order of processing.
+ :param envvar: environment variable(s) that are used to provide a default value for
+ this parameter. This can be a string or a sequence of strings. If a sequence is
+ given, only the first non-empty environment variable is used for the parameter.
+ :param shell_complete: A function that returns custom shell
+ completions. Used instead of the param's type completion if
+ given. Takes ``ctx, param, incomplete`` and must return a list
+ of :class:`~click.shell_completion.CompletionItem` or a list of
+ strings.
+    :param deprecated: If ``True`` or a non-empty string, issues a message
+                       indicating that the parameter is deprecated and highlights
+                       its deprecation in --help. The message can be customized
+                       by using a string as the value. A deprecated parameter
+                       cannot be required; otherwise a ValueError is raised.
+
+ .. versionchanged:: 8.2.0
+ Introduction of ``deprecated``.
+
+ .. versionchanged:: 8.2
+ Adding duplicate parameter names to a :class:`~click.core.Command` will
+ result in a ``UserWarning`` being shown.
+
+ .. versionchanged:: 8.0
+ ``process_value`` validates required parameters and bounded
+ ``nargs``, and invokes the parameter callback before returning
+ the value. This allows the callback to validate prompts.
+ ``full_process_value`` is removed.
+
+ .. versionchanged:: 8.0
+ ``autocompletion`` is renamed to ``shell_complete`` and has new
+ semantics described above. The old name is deprecated and will
+ be removed in 8.1, until then it will be wrapped to match the
+ new requirements.
+
+ .. versionchanged:: 8.0
+ For ``multiple=True, nargs>1``, the default must be a list of
+ tuples.
+
+ .. versionchanged:: 8.0
+ Setting a default is no longer required for ``nargs>1``, it will
+ default to ``None``. ``multiple=True`` or ``nargs=-1`` will
+ default to ``()``.
+
+ .. versionchanged:: 7.1
+ Empty environment variables are ignored rather than taking the
+ empty string value. This makes it possible for scripts to clear
+ variables if they can't unset them.
+
+ .. versionchanged:: 2.0
+ Changed signature for parameter callback to also be passed the
+        parameter. The old callback format will still work, but it will
+        raise a warning to give you a chance to migrate the code more easily.
+ """
+
+ param_type_name = "parameter"
+
+ def __init__(
+ self,
+ param_decls: cabc.Sequence[str] | None = None,
+ type: types.ParamType | t.Any | None = None,
+ required: bool = False,
+ # XXX The default historically embed two concepts:
+ # - the declaration of a Parameter object carrying the default (handy to
+        #   arbitrate the default value of coupled Parameters sharing the same
+ # self.name, like flag options),
+ # - and the actual value of the default.
+ # It is confusing and is the source of many issues discussed in:
+ # https://github.com/pallets/click/pull/3030
+ # In the future, we might think of splitting it in two, not unlike
+ # Option.is_flag and Option.flag_value: we could have something like
+ # Parameter.is_default and Parameter.default_value.
+ default: t.Any | t.Callable[[], t.Any] | None = UNSET,
+ callback: t.Callable[[Context, Parameter, t.Any], t.Any] | None = None,
+ nargs: int | None = None,
+ multiple: bool = False,
+ metavar: str | None = None,
+ expose_value: bool = True,
+ is_eager: bool = False,
+ envvar: str | cabc.Sequence[str] | None = None,
+ shell_complete: t.Callable[
+ [Context, Parameter, str], list[CompletionItem] | list[str]
+ ]
+ | None = None,
+ deprecated: bool | str = False,
+ ) -> None:
+ self.name: str | None
+ self.opts: list[str]
+ self.secondary_opts: list[str]
+ self.name, self.opts, self.secondary_opts = self._parse_decls(
+ param_decls or (), expose_value
+ )
+ self.type: types.ParamType = types.convert_type(type, default)
+
+ # Default nargs to what the type tells us if we have that
+ # information available.
+ if nargs is None:
+ if self.type.is_composite:
+ nargs = self.type.arity
+ else:
+ nargs = 1
+
+ self.required = required
+ self.callback = callback
+ self.nargs = nargs
+ self.multiple = multiple
+ self.expose_value = expose_value
+ self.default = default
+ self.is_eager = is_eager
+ self.metavar = metavar
+ self.envvar = envvar
+ self._custom_shell_complete = shell_complete
+ self.deprecated = deprecated
+
+ if __debug__:
+ if self.type.is_composite and nargs != self.type.arity:
+ raise ValueError(
+ f"'nargs' must be {self.type.arity} (or None) for"
+ f" type {self.type!r}, but it was {nargs}."
+ )
+
+ if required and deprecated:
+ raise ValueError(
+ f"The {self.param_type_name} '{self.human_readable_name}' "
+ "is deprecated and still required. A deprecated "
+ f"{self.param_type_name} cannot be required."
+ )
+
+ def to_info_dict(self) -> dict[str, t.Any]:
+ """Gather information that could be useful for a tool generating
+ user-facing documentation.
+
+ Use :meth:`click.Context.to_info_dict` to traverse the entire
+ CLI structure.
+
+ .. versionchanged:: 8.3.0
+ Returns ``None`` for the :attr:`default` if it was not set.
+
+ .. versionadded:: 8.0
+ """
+ return {
+ "name": self.name,
+ "param_type_name": self.param_type_name,
+ "opts": self.opts,
+ "secondary_opts": self.secondary_opts,
+ "type": self.type.to_info_dict(),
+ "required": self.required,
+ "nargs": self.nargs,
+ "multiple": self.multiple,
+            # We explicitly hide the :attr:`UNSET` value from the user, as we choose to
+ # make it an implementation detail. And because ``to_info_dict`` has been
+ # designed for documentation purposes, we return ``None`` instead.
+ "default": self.default if self.default is not UNSET else None,
+ "envvar": self.envvar,
+ }
+
+ def __repr__(self) -> str:
+ return f"<{self.__class__.__name__} {self.name}>"
+
+ def _parse_decls(
+ self, decls: cabc.Sequence[str], expose_value: bool
+ ) -> tuple[str | None, list[str], list[str]]:
+ raise NotImplementedError()
+
+ @property
+ def human_readable_name(self) -> str:
+ """Returns the human readable name of this parameter. This is the
+ same as the name for options, but the metavar for arguments.
+ """
+ return self.name # type: ignore
+
+ def make_metavar(self, ctx: Context) -> str:
+ if self.metavar is not None:
+ return self.metavar
+
+ metavar = self.type.get_metavar(param=self, ctx=ctx)
+
+ if metavar is None:
+ metavar = self.type.name.upper()
+
+ if self.nargs != 1:
+ metavar += "..."
+
+ return metavar
+
+ @t.overload
+ def get_default(
+ self, ctx: Context, call: t.Literal[True] = True
+ ) -> t.Any | None: ...
+
+ @t.overload
+ def get_default(
+ self, ctx: Context, call: bool = ...
+ ) -> t.Any | t.Callable[[], t.Any] | None: ...
+
+ def get_default(
+ self, ctx: Context, call: bool = True
+ ) -> t.Any | t.Callable[[], t.Any] | None:
+ """Get the default for the parameter. Tries
+ :meth:`Context.lookup_default` first, then the local default.
+
+ :param ctx: Current context.
+ :param call: If the default is a callable, call it. Disable to
+ return the callable instead.
+
+ .. versionchanged:: 8.0.2
+ Type casting is no longer performed when getting a default.
+
+ .. versionchanged:: 8.0.1
+ Type casting can fail in resilient parsing mode. Invalid
+ defaults will not prevent showing help text.
+
+ .. versionchanged:: 8.0
+ Looks at ``ctx.default_map`` first.
+
+ .. versionchanged:: 8.0
+ Added the ``call`` parameter.
+ """
+ value = ctx.lookup_default(self.name, call=False) # type: ignore
+
+ if value is UNSET:
+ value = self.default
+
+ if call and callable(value):
+ value = value()
+
+ return value
+
+ def add_to_parser(self, parser: _OptionParser, ctx: Context) -> None:
+ raise NotImplementedError()
+
+ def consume_value(
+ self, ctx: Context, opts: cabc.Mapping[str, t.Any]
+ ) -> tuple[t.Any, ParameterSource]:
+ """Returns the parameter value produced by the parser.
+
+        If the parser did not produce a value from user input, the value is
+        sourced from the environment variable, the default map, or the parameter's
+        default value, in that order of precedence.
+
+ If no value is found, an internal sentinel value is returned.
+
+ :meta private:
+ """
+        # Collect the value the user passed on the CLI, as produced by the parser.
+        value = opts.get(self.name, UNSET)  # type: ignore
+        # If the value is set, it means it was sourced from the command line by the
+        # parser; otherwise it was left unset by default.
+ source = (
+ ParameterSource.COMMANDLINE
+ if value is not UNSET
+ else ParameterSource.DEFAULT
+ )
+
+ if value is UNSET:
+ envvar_value = self.value_from_envvar(ctx)
+ if envvar_value is not None:
+ value = envvar_value
+ source = ParameterSource.ENVIRONMENT
+
+ if value is UNSET:
+ default_map_value = ctx.lookup_default(self.name) # type: ignore
+ if default_map_value is not UNSET:
+ value = default_map_value
+ source = ParameterSource.DEFAULT_MAP
+
+ if value is UNSET:
+ default_value = self.get_default(ctx)
+ if default_value is not UNSET:
+ value = default_value
+ source = ParameterSource.DEFAULT
+
+ return value, source
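+
+    # Usage sketch of the resolution order implemented above: a value given on
+    # the command line wins over the environment variable, which wins over a
+    # ``default_map`` entry, which wins over the declared default. ``--port`` and
+    # ``APP_PORT`` are placeholder names.
+    #
+    #     @click.command()
+    #     @click.option("--port", envvar="APP_PORT", default=8000, type=int)
+    #     def serve(port):
+    #         click.echo(port)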
+
+ def type_cast_value(self, ctx: Context, value: t.Any) -> t.Any:
+ """Convert and validate a value against the parameter's
+ :attr:`type`, :attr:`multiple`, and :attr:`nargs`.
+ """
+ if value in (None, UNSET):
+ if self.multiple or self.nargs == -1:
+ return ()
+ else:
+ return value
+
+ def check_iter(value: t.Any) -> cabc.Iterator[t.Any]:
+ try:
+ return _check_iter(value)
+ except TypeError:
+ # This should only happen when passing in args manually,
+ # the parser should construct an iterable when parsing
+ # the command line.
+ raise BadParameter(
+ _("Value must be an iterable."), ctx=ctx, param=self
+ ) from None
+
+ # Define the conversion function based on nargs and type.
+
+ if self.nargs == 1 or self.type.is_composite:
+
+ def convert(value: t.Any) -> t.Any:
+ return self.type(value, param=self, ctx=ctx)
+
+ elif self.nargs == -1:
+
+ def convert(value: t.Any) -> t.Any: # tuple[t.Any, ...]
+ return tuple(self.type(x, self, ctx) for x in check_iter(value))
+
+ else: # nargs > 1
+
+ def convert(value: t.Any) -> t.Any: # tuple[t.Any, ...]
+ value = tuple(check_iter(value))
+
+ if len(value) != self.nargs:
+ raise BadParameter(
+ ngettext(
+ "Takes {nargs} values but 1 was given.",
+ "Takes {nargs} values but {len} were given.",
+ len(value),
+ ).format(nargs=self.nargs, len=len(value)),
+ ctx=ctx,
+ param=self,
+ )
+
+ return tuple(self.type(x, self, ctx) for x in value)
+
+ if self.multiple:
+ return tuple(convert(x) for x in check_iter(value))
+
+ return convert(value)
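+
+    # Usage sketch: with ``nargs`` and ``multiple`` combined, the converted value
+    # is a tuple of tuples. ``--point`` is a placeholder option.
+    #
+    #     @click.command()
+    #     @click.option("--point", nargs=2, type=int, multiple=True)
+    #     def plot(point):
+    #         click.echo(point)  # --point 1 2 --point 3 4  ->  ((1, 2), (3, 4))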
+
+ def value_is_missing(self, value: t.Any) -> bool:
+ """A value is considered missing if:
+
+ - it is :attr:`UNSET`,
+        - or it is an empty sequence while the parameter is supposed to accept
+          more than one value (i.e. :attr:`nargs` is not ``1`` or :attr:`multiple`
+          is set).
+
+ :meta private:
+ """
+ if value is UNSET:
+ return True
+
+ if (self.nargs != 1 or self.multiple) and value == ():
+ return True
+
+ return False
+
+ def process_value(self, ctx: Context, value: t.Any) -> t.Any:
+ """Process the value of this parameter:
+
+ 1. Type cast the value using :meth:`type_cast_value`.
+ 2. Check if the value is missing (see: :meth:`value_is_missing`), and raise
+ :exc:`MissingParameter` if it is required.
+ 3. If a :attr:`callback` is set, call it to have the value replaced by the
+           result of the callback. If the value was not set, the callback receives
+           ``None``. This keeps the legacy behavior as it was before the introduction of
+ the :attr:`UNSET` sentinel.
+
+ :meta private:
+ """
+ value = self.type_cast_value(ctx, value)
+
+ if self.required and self.value_is_missing(value):
+ raise MissingParameter(ctx=ctx, param=self)
+
+ if self.callback is not None:
+ # Legacy case: UNSET is not exposed directly to the callback, but converted
+ # to None.
+ if value is UNSET:
+ value = None
+ value = self.callback(ctx, self, value)
+
+ return value
+
+ def resolve_envvar_value(self, ctx: Context) -> str | None:
+ """Returns the value found in the environment variable(s) attached to this
+ parameter.
+
+        Environment variable values are always returned as strings.
+
+ This method returns ``None`` if:
+
+ - the :attr:`envvar` property is not set on the :class:`Parameter`,
+ - the environment variable is not found in the environment,
+ - the variable is found in the environment but its value is empty (i.e. the
+ environment variable is present but has an empty string).
+
+        If :attr:`envvar` is set up with multiple environment variables,
+ then only the first non-empty value is returned.
+
+ .. caution::
+
+ The raw value extracted from the environment is not normalized and is
+ returned as-is. Any normalization or reconciliation is performed later by
+ the :class:`Parameter`'s :attr:`type`.
+
+ :meta private:
+ """
+ if not self.envvar:
+ return None
+
+ if isinstance(self.envvar, str):
+ rv = os.environ.get(self.envvar)
+
+ if rv:
+ return rv
+ else:
+ for envvar in self.envvar:
+ rv = os.environ.get(envvar)
+
+ # Return the first non-empty value of the list of environment variables.
+ if rv:
+ return rv
+ # Else, absence of value is interpreted as an environment variable that
+ # is not set, so proceed to the next one.
+
+ return None
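+
+    # Usage sketch: with a sequence of environment variables, the first one set
+    # to a non-empty string wins; empty values are treated as unset. ``CI_TOKEN``
+    # and ``APP_TOKEN`` are placeholder names.
+    #
+    #     @click.command()
+    #     @click.option("--token", envvar=["CI_TOKEN", "APP_TOKEN"])
+    #     def deploy(token):
+    #         click.echo(token)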
+
+ def value_from_envvar(self, ctx: Context) -> str | cabc.Sequence[str] | None:
+ """Process the raw environment variable string for this parameter.
+
+ Returns the string as-is or splits it into a sequence of strings if the
+ parameter is expecting multiple values (i.e. its :attr:`nargs` property is set
+ to a value other than ``1``).
+
+ :meta private:
+ """
+ rv = self.resolve_envvar_value(ctx)
+
+ if rv is not None and self.nargs != 1:
+ return self.type.split_envvar_value(rv)
+
+ return rv
+
+ def handle_parse_result(
+ self, ctx: Context, opts: cabc.Mapping[str, t.Any], args: list[str]
+ ) -> tuple[t.Any, list[str]]:
+ """Process the value produced by the parser from user input.
+
+ Always process the value through the Parameter's :attr:`type`, wherever it
+ comes from.
+
+        If the parameter is deprecated, this method warns the user about it, but only
+        if the value has been explicitly set by the user (and as such, is not coming
+        from a default).
+
+ :meta private:
+ """
+ with augment_usage_errors(ctx, param=self):
+ value, source = self.consume_value(ctx, opts)
+
+ ctx.set_parameter_source(self.name, source) # type: ignore
+
+ # Display a deprecation warning if necessary.
+ if (
+ self.deprecated
+ and value is not UNSET
+ and source not in (ParameterSource.DEFAULT, ParameterSource.DEFAULT_MAP)
+ ):
+ extra_message = (
+ f" {self.deprecated}" if isinstance(self.deprecated, str) else ""
+ )
+ message = _(
+ "DeprecationWarning: The {param_type} {name!r} is deprecated."
+ "{extra_message}"
+ ).format(
+ param_type=self.param_type_name,
+ name=self.human_readable_name,
+ extra_message=extra_message,
+ )
+ echo(style(message, fg="red"), err=True)
+
+ # Process the value through the parameter's type.
+ try:
+ value = self.process_value(ctx, value)
+ except Exception:
+ if not ctx.resilient_parsing:
+ raise
+ # In resilient parsing mode, we do not want to fail the command if the
+ # value is incompatible with the parameter type, so we reset the value
+ # to UNSET, which will be interpreted as a missing value.
+ value = UNSET
+
+ # Add parameter's value to the context.
+ if (
+ self.expose_value
+ # We skip adding the value if it was previously set by another parameter
+            # targeting the same variable name. This prevents parameters competing for
+            # the same name from overriding each other.
+ and self.name not in ctx.params
+ ):
+ # Click is logically enforcing that the name is None if the parameter is
+ # not to be exposed. We still assert it here to please the type checker.
+ assert self.name is not None, (
+ f"{self!r} parameter's name should not be None when exposing value."
+ )
+ # Normalize UNSET values to None, as we're about to pass them to the
+ # command function and move them to the pure-Python realm of user-written
+ # code.
+ ctx.params[self.name] = value if value is not UNSET else None
+
+ return value, args
+
+ def get_help_record(self, ctx: Context) -> tuple[str, str] | None:
+ pass
+
+ def get_usage_pieces(self, ctx: Context) -> list[str]:
+ return []
+
+ def get_error_hint(self, ctx: Context) -> str:
+ """Get a stringified version of the param for use in error messages to
+ indicate which param caused the error.
+ """
+ hint_list = self.opts or [self.human_readable_name]
+ return " / ".join(f"'{x}'" for x in hint_list)
+
+ def shell_complete(self, ctx: Context, incomplete: str) -> list[CompletionItem]:
+ """Return a list of completions for the incomplete value. If a
+ ``shell_complete`` function was given during init, it is used.
+ Otherwise, the :attr:`type`
+ :meth:`~click.types.ParamType.shell_complete` function is used.
+
+ :param ctx: Invocation context for this command.
+ :param incomplete: Value being completed. May be empty.
+
+ .. versionadded:: 8.0
+ """
+ if self._custom_shell_complete is not None:
+ results = self._custom_shell_complete(ctx, self, incomplete)
+
+ if results and isinstance(results[0], str):
+ from click.shell_completion import CompletionItem
+
+ results = [CompletionItem(c) for c in results]
+
+ return t.cast("list[CompletionItem]", results)
+
+ return self.type.shell_complete(ctx, self, incomplete)
+
+
+class Option(Parameter):
+ """Options are usually optional values on the command line and
+ have some extra features that arguments don't have.
+
+ All other parameters are passed onwards to the parameter constructor.
+
+ :param show_default: Show the default value for this option in its
+ help text. Values are not shown by default, unless
+ :attr:`Context.show_default` is ``True``. If this value is a
+ string, it shows that string in parentheses instead of the
+ actual value. This is particularly useful for dynamic options.
+ For single option boolean flags, the default remains hidden if
+ its value is ``False``.
+ :param show_envvar: Controls if an environment variable should be
+ shown on the help page and error messages.
+ Normally, environment variables are not shown.
+ :param prompt: If set to ``True`` or a non empty string then the
+ user will be prompted for input. If set to ``True`` the prompt
+ will be the option name capitalized. A deprecated option cannot be
+ prompted.
+ :param confirmation_prompt: Prompt a second time to confirm the
+ value if it was prompted for. Can be set to a string instead of
+ ``True`` to customize the message.
+ :param prompt_required: If set to ``False``, the user will be
+ prompted for input only when the option was specified as a flag
+ without a value.
+ :param hide_input: If this is ``True`` then the input on the prompt
+ will be hidden from the user. This is useful for password input.
+ :param is_flag: forces this option to act as a flag. The default is
+ auto detection.
+ :param flag_value: which value should be used for this flag if it's
+ enabled. This is set to a boolean automatically if
+ the option string contains a slash to mark two options.
+ :param multiple: if this is set to `True` then the argument is accepted
+ multiple times and recorded. This is similar to ``nargs``
+                     in how it works but supports an arbitrary number of
+                     arguments.
+ :param count: this flag makes an option increment an integer.
+ :param allow_from_autoenv: if this is enabled then the value of this
+ parameter will be pulled from an environment
+ variable in case a prefix is defined on the
+ context.
+ :param help: the help string.
+ :param hidden: hide this option from help outputs.
+ :param attrs: Other command arguments described in :class:`Parameter`.
+
+ .. versionchanged:: 8.2
+ ``envvar`` used with ``flag_value`` will always use the ``flag_value``,
+ previously it would use the value of the environment variable.
+
+ .. versionchanged:: 8.1
+ Help text indentation is cleaned here instead of only in the
+ ``@option`` decorator.
+
+ .. versionchanged:: 8.1
+ The ``show_default`` parameter overrides
+ ``Context.show_default``.
+
+ .. versionchanged:: 8.1
+ The default of a single option boolean flag is not shown if the
+ default value is ``False``.
+
+ .. versionchanged:: 8.0.1
+ ``type`` is detected from ``flag_value`` if given.
+ """
+
+ param_type_name = "option"
+
+ def __init__(
+ self,
+ param_decls: cabc.Sequence[str] | None = None,
+ show_default: bool | str | None = None,
+ prompt: bool | str = False,
+ confirmation_prompt: bool | str = False,
+ prompt_required: bool = True,
+ hide_input: bool = False,
+ is_flag: bool | None = None,
+ flag_value: t.Any = UNSET,
+ multiple: bool = False,
+ count: bool = False,
+ allow_from_autoenv: bool = True,
+ type: types.ParamType | t.Any | None = None,
+ help: str | None = None,
+ hidden: bool = False,
+ show_choices: bool = True,
+ show_envvar: bool = False,
+ deprecated: bool | str = False,
+ **attrs: t.Any,
+ ) -> None:
+ if help:
+ help = inspect.cleandoc(help)
+
+ super().__init__(
+ param_decls, type=type, multiple=multiple, deprecated=deprecated, **attrs
+ )
+
+ if prompt is True:
+ if self.name is None:
+ raise TypeError("'name' is required with 'prompt=True'.")
+
+ prompt_text: str | None = self.name.replace("_", " ").capitalize()
+ elif prompt is False:
+ prompt_text = None
+ else:
+ prompt_text = prompt
+
+ if deprecated:
+ deprecated_message = (
+ f"(DEPRECATED: {deprecated})"
+ if isinstance(deprecated, str)
+ else "(DEPRECATED)"
+ )
+ help = help + deprecated_message if help is not None else deprecated_message
+
+ self.prompt = prompt_text
+ self.confirmation_prompt = confirmation_prompt
+ self.prompt_required = prompt_required
+ self.hide_input = hide_input
+ self.hidden = hidden
+
+ # The _flag_needs_value property tells the parser that this option is a flag
+ # that cannot be used standalone and needs a value. With this information, the
+ # parser can determine whether to consider the next user-provided argument in
+ # the CLI as a value for this flag or as a new option.
+ # If prompt is enabled but not required, then it opens the possibility for the
+        # option to get its value from the user.
+ self._flag_needs_value = self.prompt is not None and not self.prompt_required
+
+ # Auto-detect if this is a flag or not.
+ if is_flag is None:
+ # Implicitly a flag because flag_value was set.
+ if flag_value is not UNSET:
+ is_flag = True
+ # Not a flag, but when used as a flag it shows a prompt.
+ elif self._flag_needs_value:
+ is_flag = False
+ # Implicitly a flag because secondary options names were given.
+ elif self.secondary_opts:
+ is_flag = True
+ # The option is explicitly not a flag. But we do not know yet if it needs a
+ # value or not. So we look at the default value to determine it.
+ elif is_flag is False and not self._flag_needs_value:
+ self._flag_needs_value = self.default is UNSET
+
+ if is_flag:
+ # Set missing default for flags if not explicitly required or prompted.
+ if self.default is UNSET and not self.required and not self.prompt:
+ if multiple:
+ self.default = ()
+
+ # Auto-detect the type of the flag based on the flag_value.
+ if type is None:
+ # A flag without a flag_value is a boolean flag.
+ if flag_value is UNSET:
+ self.type = types.BoolParamType()
+ # If the flag value is a boolean, use BoolParamType.
+ elif isinstance(flag_value, bool):
+ self.type = types.BoolParamType()
+ # Otherwise, guess the type from the flag value.
+ else:
+ self.type = types.convert_type(None, flag_value)
+
+ self.is_flag: bool = bool(is_flag)
+ self.is_bool_flag: bool = bool(
+ is_flag and isinstance(self.type, types.BoolParamType)
+ )
+ self.flag_value: t.Any = flag_value
+
+ # Set boolean flag default to False if unset and not required.
+ if self.is_bool_flag:
+ if self.default is UNSET and not self.required:
+ self.default = False
+
+ # Support the special case of aligning the default value with the flag_value
+ # for flags whose default is explicitly set to True. Note that as long as we
+ # have this condition, there is no way a flag can have a default set to True,
+ # and a flag_value set to something else. Refs:
+ # https://github.com/pallets/click/issues/3024#issuecomment-3146199461
+ # https://github.com/pallets/click/pull/3030/commits/06847da
+ if self.default is True and self.flag_value is not UNSET:
+ self.default = self.flag_value
+
+ # Set the default flag_value if it is not set.
+ if self.flag_value is UNSET:
+ if self.is_flag:
+ self.flag_value = True
+ else:
+ self.flag_value = None
+
+ # Counting.
+ self.count = count
+ if count:
+ if type is None:
+ self.type = types.IntRange(min=0)
+ if self.default is UNSET:
+ self.default = 0
+
+ self.allow_from_autoenv = allow_from_autoenv
+ self.help = help
+ self.show_default = show_default
+ self.show_choices = show_choices
+ self.show_envvar = show_envvar
+
+ if __debug__:
+ if deprecated and prompt:
+ raise ValueError("`deprecated` options cannot use `prompt`.")
+
+ if self.nargs == -1:
+ raise TypeError("nargs=-1 is not supported for options.")
+
+ if not self.is_bool_flag and self.secondary_opts:
+ raise TypeError("Secondary flag is not valid for non-boolean flag.")
+
+ if self.is_bool_flag and self.hide_input and self.prompt is not None:
+ raise TypeError(
+ "'prompt' with 'hide_input' is not valid for boolean flag."
+ )
+
+ if self.count:
+ if self.multiple:
+ raise TypeError("'count' is not valid with 'multiple'.")
+
+ if self.is_flag:
+ raise TypeError("'count' is not valid with 'is_flag'.")
+
+ def to_info_dict(self) -> dict[str, t.Any]:
+ """
+ .. versionchanged:: 8.3.0
+ Returns ``None`` for the :attr:`flag_value` if it was not set.
+ """
+ info_dict = super().to_info_dict()
+ info_dict.update(
+ help=self.help,
+ prompt=self.prompt,
+ is_flag=self.is_flag,
+            # We explicitly hide the :attr:`UNSET` value from the user, as we choose to
+ # make it an implementation detail. And because ``to_info_dict`` has been
+ # designed for documentation purposes, we return ``None`` instead.
+ flag_value=self.flag_value if self.flag_value is not UNSET else None,
+ count=self.count,
+ hidden=self.hidden,
+ )
+ return info_dict
+
+ def get_error_hint(self, ctx: Context) -> str:
+ result = super().get_error_hint(ctx)
+ if self.show_envvar and self.envvar is not None:
+ result += f" (env var: '{self.envvar}')"
+ return result
+
+ def _parse_decls(
+ self, decls: cabc.Sequence[str], expose_value: bool
+ ) -> tuple[str | None, list[str], list[str]]:
+ opts = []
+ secondary_opts = []
+ name = None
+ possible_names = []
+
+ for decl in decls:
+ if decl.isidentifier():
+ if name is not None:
+ raise TypeError(f"Name '{name}' defined twice")
+ name = decl
+ else:
+ split_char = ";" if decl[:1] == "/" else "/"
+ if split_char in decl:
+ first, second = decl.split(split_char, 1)
+ first = first.rstrip()
+ if first:
+ possible_names.append(_split_opt(first))
+ opts.append(first)
+ second = second.lstrip()
+ if second:
+ secondary_opts.append(second.lstrip())
+ if first == second:
+ raise ValueError(
+ f"Boolean option {decl!r} cannot use the"
+ " same flag for true/false."
+ )
+ else:
+ possible_names.append(_split_opt(decl))
+ opts.append(decl)
+
+ if name is None and possible_names:
+ possible_names.sort(key=lambda x: -len(x[0])) # group long options first
+ name = possible_names[0][1].replace("-", "_").lower()
+ if not name.isidentifier():
+ name = None
+
+ if name is None:
+ if not expose_value:
+ return None, opts, secondary_opts
+ raise TypeError(
+ f"Could not determine name for option with declarations {decls!r}"
+ )
+
+ if not opts and not secondary_opts:
+ raise TypeError(
+ f"No options defined but a name was passed ({name})."
+ " Did you mean to declare an argument instead? Did"
+ f" you mean to pass '--{name}'?"
+ )
+
+ return name, opts, secondary_opts
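+
+    # Sketch of how declarations map to the attributes computed above
+    # (placeholder options):
+    #
+    #     @click.option("-v", "--verbose")           # name "verbose", opts ["-v", "--verbose"]
+    #     @click.option("--shout/--no-shout")        # opts ["--shout"], secondary_opts ["--no-shout"]
+    #     @click.option("-n", "--name", "nickname")  # the bare identifier forces name "nickname"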
+
+ def add_to_parser(self, parser: _OptionParser, ctx: Context) -> None:
+ if self.multiple:
+ action = "append"
+ elif self.count:
+ action = "count"
+ else:
+ action = "store"
+
+ if self.is_flag:
+ action = f"{action}_const"
+
+ if self.is_bool_flag and self.secondary_opts:
+ parser.add_option(
+ obj=self, opts=self.opts, dest=self.name, action=action, const=True
+ )
+ parser.add_option(
+ obj=self,
+ opts=self.secondary_opts,
+ dest=self.name,
+ action=action,
+ const=False,
+ )
+ else:
+ parser.add_option(
+ obj=self,
+ opts=self.opts,
+ dest=self.name,
+ action=action,
+ const=self.flag_value,
+ )
+ else:
+ parser.add_option(
+ obj=self,
+ opts=self.opts,
+ dest=self.name,
+ action=action,
+ nargs=self.nargs,
+ )
+
+ def get_help_record(self, ctx: Context) -> tuple[str, str] | None:
+ if self.hidden:
+ return None
+
+ any_prefix_is_slash = False
+
+ def _write_opts(opts: cabc.Sequence[str]) -> str:
+ nonlocal any_prefix_is_slash
+
+ rv, any_slashes = join_options(opts)
+
+ if any_slashes:
+ any_prefix_is_slash = True
+
+ if not self.is_flag and not self.count:
+ rv += f" {self.make_metavar(ctx=ctx)}"
+
+ return rv
+
+ rv = [_write_opts(self.opts)]
+
+ if self.secondary_opts:
+ rv.append(_write_opts(self.secondary_opts))
+
+ help = self.help or ""
+
+ extra = self.get_help_extra(ctx)
+ extra_items = []
+ if "envvars" in extra:
+ extra_items.append(
+ _("env var: {var}").format(var=", ".join(extra["envvars"]))
+ )
+ if "default" in extra:
+ extra_items.append(_("default: {default}").format(default=extra["default"]))
+ if "range" in extra:
+ extra_items.append(extra["range"])
+ if "required" in extra:
+ extra_items.append(_(extra["required"]))
+
+ if extra_items:
+ extra_str = "; ".join(extra_items)
+ help = f"{help} [{extra_str}]" if help else f"[{extra_str}]"
+
+ return ("; " if any_prefix_is_slash else " / ").join(rv), help
+
+ def get_help_extra(self, ctx: Context) -> types.OptionHelpExtra:
+ extra: types.OptionHelpExtra = {}
+
+ if self.show_envvar:
+ envvar = self.envvar
+
+ if envvar is None:
+ if (
+ self.allow_from_autoenv
+ and ctx.auto_envvar_prefix is not None
+ and self.name is not None
+ ):
+ envvar = f"{ctx.auto_envvar_prefix}_{self.name.upper()}"
+
+ if envvar is not None:
+ if isinstance(envvar, str):
+ extra["envvars"] = (envvar,)
+ else:
+ extra["envvars"] = tuple(str(d) for d in envvar)
+
+ # Temporarily enable resilient parsing to avoid type casting
+ # failing for the default. Might be possible to extend this to
+ # help formatting in general.
+ resilient = ctx.resilient_parsing
+ ctx.resilient_parsing = True
+
+ try:
+ default_value = self.get_default(ctx, call=False)
+ finally:
+ ctx.resilient_parsing = resilient
+
+ show_default = False
+ show_default_is_str = False
+
+ if self.show_default is not None:
+ if isinstance(self.show_default, str):
+ show_default_is_str = show_default = True
+ else:
+ show_default = self.show_default
+ elif ctx.show_default is not None:
+ show_default = ctx.show_default
+
+ if show_default_is_str or (
+ show_default and (default_value not in (None, UNSET))
+ ):
+ if show_default_is_str:
+ default_string = f"({self.show_default})"
+ elif isinstance(default_value, (list, tuple)):
+ default_string = ", ".join(str(d) for d in default_value)
+ elif isinstance(default_value, enum.Enum):
+ default_string = default_value.name
+ elif inspect.isfunction(default_value):
+ default_string = _("(dynamic)")
+ elif self.is_bool_flag and self.secondary_opts:
+ # For boolean flags that have distinct True/False opts,
+ # use the opt without prefix instead of the value.
+ default_string = _split_opt(
+ (self.opts if default_value else self.secondary_opts)[0]
+ )[1]
+ elif self.is_bool_flag and not self.secondary_opts and not default_value:
+ default_string = ""
+ elif default_value == "":
+ default_string = '""'
+ else:
+ default_string = str(default_value)
+
+ if default_string:
+ extra["default"] = default_string
+
+ if (
+ isinstance(self.type, types._NumberRangeBase)
+ # skip count with default range type
+ and not (self.count and self.type.min == 0 and self.type.max is None)
+ ):
+ range_str = self.type._describe_range()
+
+ if range_str:
+ extra["range"] = range_str
+
+ if self.required:
+ extra["required"] = "required"
+
+ return extra
+
+ def prompt_for_value(self, ctx: Context) -> t.Any:
+ """This is an alternative flow that can be activated in the full
+ value processing if a value does not exist. It will prompt the
+ user until a valid value exists and then returns the processed
+ value as result.
+ """
+ assert self.prompt is not None
+
+ # Calculate the default before prompting anything to lock in the value before
+ # attempting any user interaction.
+ default = self.get_default(ctx)
+
+ # A boolean flag can use a simplified [y/n] confirmation prompt.
+ if self.is_bool_flag:
+ # If we have no boolean default, we force the user to explicitly provide
+ # one.
+ if default in (UNSET, None):
+ default = None
+            # Nothing prevents you from declaring an option that is simultaneously:
+            # 1) auto-detected as a boolean flag,
+            # 2) allowed to prompt, and
+            # 3) still declared with a non-boolean default.
+ # This forced casting into a boolean is necessary to align any non-boolean
+ # default to the prompt, which is going to be a [y/n]-style confirmation
+ # because the option is still a boolean flag. That way, instead of [y/n],
+ # we get [Y/n] or [y/N] depending on the truthy value of the default.
+ # Refs: https://github.com/pallets/click/pull/3030#discussion_r2289180249
+ else:
+ default = bool(default)
+ return confirm(self.prompt, default)
+
+ # If show_default is set to True/False, provide this to `prompt` as well. For
+ # non-bool values of `show_default`, we use `prompt`'s default behavior
+ prompt_kwargs: t.Any = {}
+ if isinstance(self.show_default, bool):
+ prompt_kwargs["show_default"] = self.show_default
+
+ return prompt(
+ self.prompt,
+ # Use ``None`` to inform the prompt() function to reiterate until a valid
+ # value is provided by the user if we have no default.
+ default=None if default is UNSET else default,
+ type=self.type,
+ hide_input=self.hide_input,
+ show_choices=self.show_choices,
+ confirmation_prompt=self.confirmation_prompt,
+ value_proc=lambda x: self.process_value(ctx, x),
+ **prompt_kwargs,
+ )
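+
+    # Usage sketch: a prompted, hidden, confirmed option (placeholder names).
+    #
+    #     @click.command()
+    #     @click.option("--password", prompt=True, hide_input=True,
+    #                   confirmation_prompt=True)
+    #     def login(password):
+    #         click.echo("logged in")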
+
+ def resolve_envvar_value(self, ctx: Context) -> str | None:
+ """:class:`Option` resolves its environment variable the same way as
+ :func:`Parameter.resolve_envvar_value`, but it also supports
+        :attr:`Context.auto_envvar_prefix`. If we could not find an environment
+        variable from the :attr:`envvar` property, we fall back on
+        :attr:`Context.auto_envvar_prefix` to dynamically build the environment
+        variable name using the
+ :python:`{ctx.auto_envvar_prefix}_{self.name.upper()}` template.
+
+ :meta private:
+ """
+ rv = super().resolve_envvar_value(ctx)
+
+ if rv is not None:
+ return rv
+
+ if (
+ self.allow_from_autoenv
+ and ctx.auto_envvar_prefix is not None
+ and self.name is not None
+ ):
+ envvar = f"{ctx.auto_envvar_prefix}_{self.name.upper()}"
+ rv = os.environ.get(envvar)
+
+ if rv:
+ return rv
+
+ return None
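+
+    # Usage sketch: ``auto_envvar_prefix`` is supplied when invoking the command,
+    # and ``--debug`` then falls back on ``MYTOOL_DEBUG`` when it is not passed
+    # on the command line (placeholder names).
+    #
+    #     @click.command()
+    #     @click.option("--debug/--no-debug")
+    #     def cli(debug):
+    #         click.echo(debug)
+    #
+    #     if __name__ == "__main__":
+    #         cli(auto_envvar_prefix="MYTOOL")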
+
+ def value_from_envvar(self, ctx: Context) -> t.Any:
+ """For :class:`Option`, this method processes the raw environment variable
+ string the same way as :func:`Parameter.value_from_envvar` does.
+
+        But in the case of non-boolean flags, the value is analyzed to determine
+        whether the flag is activated, and the method returns the resulting boolean,
+        or the :attr:`flag_value` if the latter is set.
+
+ This method also takes care of repeated options (i.e. options with
+ :attr:`multiple` set to ``True``).
+
+ :meta private:
+ """
+ rv = self.resolve_envvar_value(ctx)
+
+ # Absent environment variable or an empty string is interpreted as unset.
+ if rv is None:
+ return None
+
+        # Non-boolean flags are more liberal in what they accept, but their envvar
+        # value still needs to be analyzed to determine whether the flag is
+        # activated or not.
+ if self.is_flag and not self.is_bool_flag:
+ # If the flag_value is set and match the envvar value, return it
+ # directly.
+ if self.flag_value is not UNSET and rv == self.flag_value:
+ return self.flag_value
+ # Analyze the envvar value as a boolean to know if the flag is
+ # activated or not.
+ return types.BoolParamType.str_to_bool(rv)
+
+ # Split the envvar value if it is allowed to be repeated.
+ value_depth = (self.nargs != 1) + bool(self.multiple)
+ if value_depth > 0:
+ multi_rv = self.type.split_envvar_value(rv)
+ if self.multiple and self.nargs != 1:
+ multi_rv = batch(multi_rv, self.nargs) # type: ignore[assignment]
+
+ return multi_rv
+
+ return rv
+
+ def consume_value(
+ self, ctx: Context, opts: cabc.Mapping[str, Parameter]
+ ) -> tuple[t.Any, ParameterSource]:
+ """For :class:`Option`, the value can be collected from an interactive prompt
+ if the option is a flag that needs a value (and the :attr:`prompt` property is
+ set).
+
+        Additionally, this method handles flag options that are activated without a
+ value, in which case the :attr:`flag_value` is returned.
+
+ :meta private:
+ """
+ value, source = super().consume_value(ctx, opts)
+
+        # The parser will emit a sentinel value if the option is allowed to be used
+        # as a flag without a value.
+ if value is FLAG_NEEDS_VALUE:
+ # If the option allows for a prompt, we start an interaction with the user.
+ if self.prompt is not None and not ctx.resilient_parsing:
+ value = self.prompt_for_value(ctx)
+ source = ParameterSource.PROMPT
+ # Else the flag takes its flag_value as value.
+ else:
+ value = self.flag_value
+ source = ParameterSource.COMMANDLINE
+
+ # A flag which is activated always returns the flag value, unless the value
+        # comes from an explicitly set default.
+ elif (
+ self.is_flag
+ and value is True
+ and not self.is_bool_flag
+ and source not in (ParameterSource.DEFAULT, ParameterSource.DEFAULT_MAP)
+ ):
+ value = self.flag_value
+
+ # Re-interpret a multiple option which has been sent as-is by the parser.
+ # Here we replace each occurrence of value-less flags (marked by the
+ # FLAG_NEEDS_VALUE sentinel) with the flag_value.
+ elif (
+ self.multiple
+ and value is not UNSET
+ and source not in (ParameterSource.DEFAULT, ParameterSource.DEFAULT_MAP)
+ and any(v is FLAG_NEEDS_VALUE for v in value)
+ ):
+ value = [self.flag_value if v is FLAG_NEEDS_VALUE else v for v in value]
+ source = ParameterSource.COMMANDLINE
+
+        # The value wasn't set, or came from the param's default; prompt the user for
+        # one if prompting is enabled.
+ elif (
+ (
+ value is UNSET
+ or source in (ParameterSource.DEFAULT, ParameterSource.DEFAULT_MAP)
+ )
+ and self.prompt is not None
+ and (self.required or self.prompt_required)
+ and not ctx.resilient_parsing
+ ):
+ value = self.prompt_for_value(ctx)
+ source = ParameterSource.PROMPT
+
+ return value, source
+
+ def type_cast_value(self, ctx: Context, value: t.Any) -> t.Any:
+ if self.is_flag and not self.required:
+ if value is UNSET:
+ if self.is_bool_flag:
+ # If the flag is a boolean flag, we return False if it is not set.
+ value = False
+ return super().type_cast_value(ctx, value)
+
+
+class Argument(Parameter):
+ """Arguments are positional parameters to a command. They generally
+ provide fewer features than options but can have infinite ``nargs``
+ and are required by default.
+
+ All parameters are passed onwards to the constructor of :class:`Parameter`.
+ """
+
+ param_type_name = "argument"
+
+ def __init__(
+ self,
+ param_decls: cabc.Sequence[str],
+ required: bool | None = None,
+ **attrs: t.Any,
+ ) -> None:
+ # Auto-detect the requirement status of the argument if not explicitly set.
+ if required is None:
+ # The argument gets automatically required if it has no explicit default
+            # value set and is set up to match at least one value.
+ if attrs.get("default", UNSET) is UNSET:
+ required = attrs.get("nargs", 1) > 0
+ # If the argument has a default value, it is not required.
+ else:
+ required = False
+
+ if "multiple" in attrs:
+ raise TypeError("__init__() got an unexpected keyword argument 'multiple'.")
+
+ super().__init__(param_decls, required=required, **attrs)
+
+ @property
+ def human_readable_name(self) -> str:
+ if self.metavar is not None:
+ return self.metavar
+ return self.name.upper() # type: ignore
+
+ def make_metavar(self, ctx: Context) -> str:
+ if self.metavar is not None:
+ return self.metavar
+ var = self.type.get_metavar(param=self, ctx=ctx)
+ if not var:
+ var = self.name.upper() # type: ignore
+ if self.deprecated:
+ var += "!"
+ if not self.required:
+ var = f"[{var}]"
+ if self.nargs != 1:
+ var += "..."
+ return var
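+
+    # Sketch of how argument metavars render in the usage line (placeholder
+    # names): an argument that is not required is wrapped in brackets, and a
+    # non-unit ``nargs`` appends "...".
+    #
+    #     @click.command()
+    #     @click.argument("src", nargs=-1)
+    #     @click.argument("dst")
+    #     def copy(src, dst):
+    #         ...
+    #
+    #     # Usage renders roughly as:  copy [OPTIONS] [SRC]... DST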
+
+ def _parse_decls(
+ self, decls: cabc.Sequence[str], expose_value: bool
+ ) -> tuple[str | None, list[str], list[str]]:
+ if not decls:
+ if not expose_value:
+ return None, [], []
+ raise TypeError("Argument is marked as exposed, but does not have a name.")
+ if len(decls) == 1:
+ name = arg = decls[0]
+ name = name.replace("-", "_").lower()
+ else:
+ raise TypeError(
+ "Arguments take exactly one parameter declaration, got"
+ f" {len(decls)}: {decls}."
+ )
+ return name, [arg], []
+
+ def get_usage_pieces(self, ctx: Context) -> list[str]:
+ return [self.make_metavar(ctx)]
+
+ def get_error_hint(self, ctx: Context) -> str:
+ return f"'{self.make_metavar(ctx)}'"
+
+ def add_to_parser(self, parser: _OptionParser, ctx: Context) -> None:
+ parser.add_argument(dest=self.name, nargs=self.nargs, obj=self)
+
+
+def __getattr__(name: str) -> object:
+ import warnings
+
+ if name == "BaseCommand":
+ warnings.warn(
+ "'BaseCommand' is deprecated and will be removed in Click 9.0. Use"
+ " 'Command' instead.",
+ DeprecationWarning,
+ stacklevel=2,
+ )
+ return _BaseCommand
+
+ if name == "MultiCommand":
+ warnings.warn(
+ "'MultiCommand' is deprecated and will be removed in Click 9.0. Use"
+ " 'Group' instead.",
+ DeprecationWarning,
+ stacklevel=2,
+ )
+ return _MultiCommand
+
+ raise AttributeError(name)
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/click/decorators.py b/stripe_config/.venv/lib/python3.12/site-packages/click/decorators.py
new file mode 100644
index 0000000..21f4c34
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/click/decorators.py
@@ -0,0 +1,551 @@
+from __future__ import annotations
+
+import inspect
+import typing as t
+from functools import update_wrapper
+from gettext import gettext as _
+
+from .core import Argument
+from .core import Command
+from .core import Context
+from .core import Group
+from .core import Option
+from .core import Parameter
+from .globals import get_current_context
+from .utils import echo
+
+if t.TYPE_CHECKING:
+ import typing_extensions as te
+
+ P = te.ParamSpec("P")
+
+R = t.TypeVar("R")
+T = t.TypeVar("T")
+_AnyCallable = t.Callable[..., t.Any]
+FC = t.TypeVar("FC", bound="_AnyCallable | Command")
+
+
+def pass_context(f: t.Callable[te.Concatenate[Context, P], R]) -> t.Callable[P, R]:
+ """Marks a callback as wanting to receive the current context
+ object as first argument.
+ """
+
+ def new_func(*args: P.args, **kwargs: P.kwargs) -> R:
+ return f(get_current_context(), *args, **kwargs)
+
+ return update_wrapper(new_func, f)
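+
+# Usage sketch: the decorated callback receives the current Context as its first
+# argument (placeholder names).
+#
+#     @click.command()
+#     @click.option("--count", default=1)
+#     @click.pass_context
+#     def repeat(ctx, count):
+#         click.echo(f"{ctx.info_name} called with count={count}")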
+
+
+def pass_obj(f: t.Callable[te.Concatenate[T, P], R]) -> t.Callable[P, R]:
+ """Similar to :func:`pass_context`, but only pass the object on the
+ context onwards (:attr:`Context.obj`). This is useful if that object
+ represents the state of a nested system.
+ """
+
+ def new_func(*args: P.args, **kwargs: P.kwargs) -> R:
+ return f(get_current_context().obj, *args, **kwargs)
+
+ return update_wrapper(new_func, f)
+
+
+def make_pass_decorator(
+ object_type: type[T], ensure: bool = False
+) -> t.Callable[[t.Callable[te.Concatenate[T, P], R]], t.Callable[P, R]]:
+ """Given an object type this creates a decorator that will work
+ similar to :func:`pass_obj` but instead of passing the object of the
+ current context, it will find the innermost context of type
+ :func:`object_type`.
+
+ This generates a decorator that works roughly like this::
+
+ from functools import update_wrapper
+
+ def decorator(f):
+ @pass_context
+ def new_func(ctx, *args, **kwargs):
+ obj = ctx.find_object(object_type)
+ return ctx.invoke(f, obj, *args, **kwargs)
+ return update_wrapper(new_func, f)
+ return decorator
+
+ :param object_type: the type of the object to pass.
+ :param ensure: if set to `True`, a new object will be created and
+ remembered on the context if it's not there yet.
+ """
+
+ def decorator(f: t.Callable[te.Concatenate[T, P], R]) -> t.Callable[P, R]:
+ def new_func(*args: P.args, **kwargs: P.kwargs) -> R:
+ ctx = get_current_context()
+
+ obj: T | None
+ if ensure:
+ obj = ctx.ensure_object(object_type)
+ else:
+ obj = ctx.find_object(object_type)
+
+ if obj is None:
+ raise RuntimeError(
+ "Managed to invoke callback without a context"
+ f" object of type {object_type.__name__!r}"
+ " existing."
+ )
+
+ return ctx.invoke(f, obj, *args, **kwargs)
+
+ return update_wrapper(new_func, f)
+
+ return decorator
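+
+# Usage sketch: a reusable decorator that injects the closest ``Repo`` object
+# from the context chain, creating one if needed (``Repo`` is a placeholder
+# class).
+#
+#     class Repo:
+#         def __init__(self, home="."):
+#             self.home = home
+#
+#     pass_repo = click.make_pass_decorator(Repo, ensure=True)
+#
+#     @click.group()
+#     @click.pass_context
+#     def cli(ctx):
+#         ctx.obj = Repo()
+#
+#     @cli.command()
+#     @pass_repo
+#     def status(repo):
+#         click.echo(repo.home)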
+
+
+def pass_meta_key(
+ key: str, *, doc_description: str | None = None
+) -> t.Callable[[t.Callable[te.Concatenate[T, P], R]], t.Callable[P, R]]:
+ """Create a decorator that passes a key from
+ :attr:`click.Context.meta` as the first argument to the decorated
+ function.
+
+ :param key: Key in ``Context.meta`` to pass.
+ :param doc_description: Description of the object being passed,
+ inserted into the decorator's docstring. Defaults to "the 'key'
+ key from Context.meta".
+
+ .. versionadded:: 8.0
+ """
+
+ def decorator(f: t.Callable[te.Concatenate[T, P], R]) -> t.Callable[P, R]:
+ def new_func(*args: P.args, **kwargs: P.kwargs) -> R:
+ ctx = get_current_context()
+ obj = ctx.meta[key]
+ return ctx.invoke(f, obj, *args, **kwargs)
+
+ return update_wrapper(new_func, f)
+
+ if doc_description is None:
+ doc_description = f"the {key!r} key from :attr:`click.Context.meta`"
+
+ decorator.__doc__ = (
+ f"Decorator that passes {doc_description} as the first argument"
+ " to the decorated function."
+ )
+ return decorator
+
+
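+# Sketch of how an extension might use this; the meta key name is purely
+# illustrative:
+#
+#     pass_config = click.decorators.pass_meta_key(
+#         "my_extension.config", doc_description="the extension configuration"
+#     )
+#
+#     @click.command()
+#     @pass_config
+#     def report(config):
+#         click.echo(repr(config))
+
+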
+CmdType = t.TypeVar("CmdType", bound=Command)
+
+
+# variant: no call, directly as decorator for a function.
+@t.overload
+def command(name: _AnyCallable) -> Command: ...
+
+
+# variant: with positional name and with positional or keyword cls argument:
+# @command(namearg, CommandCls, ...) or @command(namearg, cls=CommandCls, ...)
+@t.overload
+def command(
+ name: str | None,
+ cls: type[CmdType],
+ **attrs: t.Any,
+) -> t.Callable[[_AnyCallable], CmdType]: ...
+
+
+# variant: name omitted, cls _must_ be a keyword argument, @command(cls=CommandCls, ...)
+@t.overload
+def command(
+ name: None = None,
+ *,
+ cls: type[CmdType],
+ **attrs: t.Any,
+) -> t.Callable[[_AnyCallable], CmdType]: ...
+
+
+# variant: with optional string name, no cls argument provided.
+@t.overload
+def command(
+ name: str | None = ..., cls: None = None, **attrs: t.Any
+) -> t.Callable[[_AnyCallable], Command]: ...
+
+
+def command(
+ name: str | _AnyCallable | None = None,
+ cls: type[CmdType] | None = None,
+ **attrs: t.Any,
+) -> Command | t.Callable[[_AnyCallable], Command | CmdType]:
+ r"""Creates a new :class:`Command` and uses the decorated function as
+ callback. This will also automatically attach all decorated
+ :func:`option`\s and :func:`argument`\s as parameters to the command.
+
+ The name of the command defaults to the name of the function, converted to
+ lowercase, with underscores ``_`` replaced by dashes ``-``, and the suffixes
+ ``_command``, ``_cmd``, ``_group``, and ``_grp`` are removed. For example,
+ ``init_data_command`` becomes ``init-data``.
+
+ All keyword arguments are forwarded to the underlying command class.
+ For the ``params`` argument, any decorated params are appended to
+ the end of the list.
+
+ Once decorated the function turns into a :class:`Command` instance
+ that can be invoked as a command line utility or be attached to a
+ command :class:`Group`.
+
+ :param name: The name of the command. Defaults to modifying the function's
+ name as described above.
+ :param cls: The command class to create. Defaults to :class:`Command`.
+
+ .. versionchanged:: 8.2
+ The suffixes ``_command``, ``_cmd``, ``_group``, and ``_grp`` are
+ removed when generating the name.
+
+ .. versionchanged:: 8.1
+ This decorator can be applied without parentheses.
+
+ .. versionchanged:: 8.1
+ The ``params`` argument can be used. Decorated params are
+ appended to the end of the list.
+ """
+
+ func: t.Callable[[_AnyCallable], t.Any] | None = None
+
+ if callable(name):
+ func = name
+ name = None
+ assert cls is None, "Use 'command(cls=cls)(callable)' to specify a class."
+ assert not attrs, "Use 'command(**kwargs)(callable)' to provide arguments."
+
+ if cls is None:
+ cls = t.cast("type[CmdType]", Command)
+
+ def decorator(f: _AnyCallable) -> CmdType:
+ if isinstance(f, Command):
+ raise TypeError("Attempted to convert a callback into a command twice.")
+
+ attr_params = attrs.pop("params", None)
+ params = attr_params if attr_params is not None else []
+
+ try:
+ decorator_params = f.__click_params__ # type: ignore
+ except AttributeError:
+ pass
+ else:
+ del f.__click_params__ # type: ignore
+ params.extend(reversed(decorator_params))
+
+ if attrs.get("help") is None:
+ attrs["help"] = f.__doc__
+
+ if t.TYPE_CHECKING:
+ assert cls is not None
+ assert not callable(name)
+
+ if name is not None:
+ cmd_name = name
+ else:
+ cmd_name = f.__name__.lower().replace("_", "-")
+ cmd_left, sep, suffix = cmd_name.rpartition("-")
+
+ if sep and suffix in {"command", "cmd", "group", "grp"}:
+ cmd_name = cmd_left
+
+ cmd = cls(name=cmd_name, callback=f, params=params, **attrs)
+ cmd.__doc__ = f.__doc__
+ return cmd
+
+ if func is not None:
+ return decorator(func)
+
+ return decorator
+
+
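+# A minimal sketch of both call styles documented above:
+#
+#     @click.command()  # name derived from the function: "init-data"
+#     def init_data_command():
+#         """Initialize the data store."""
+#
+#     @click.command("import", cls=click.Command)  # explicit name and class
+#     def import_command():
+#         pass
+
+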
+GrpType = t.TypeVar("GrpType", bound=Group)
+
+
+# variant: no call, directly as decorator for a function.
+@t.overload
+def group(name: _AnyCallable) -> Group: ...
+
+
+# variant: with positional name and with positional or keyword cls argument:
+# @group(namearg, GroupCls, ...) or @group(namearg, cls=GroupCls, ...)
+@t.overload
+def group(
+ name: str | None,
+ cls: type[GrpType],
+ **attrs: t.Any,
+) -> t.Callable[[_AnyCallable], GrpType]: ...
+
+
+# variant: name omitted, cls _must_ be a keyword argument, @group(cls=GroupCls, ...)
+@t.overload
+def group(
+ name: None = None,
+ *,
+ cls: type[GrpType],
+ **attrs: t.Any,
+) -> t.Callable[[_AnyCallable], GrpType]: ...
+
+
+# variant: with optional string name, no cls argument provided.
+@t.overload
+def group(
+ name: str | None = ..., cls: None = None, **attrs: t.Any
+) -> t.Callable[[_AnyCallable], Group]: ...
+
+
+def group(
+ name: str | _AnyCallable | None = None,
+ cls: type[GrpType] | None = None,
+ **attrs: t.Any,
+) -> Group | t.Callable[[_AnyCallable], Group | GrpType]:
+ """Creates a new :class:`Group` with a function as callback. This
+ works otherwise the same as :func:`command` just that the `cls`
+ parameter is set to :class:`Group`.
+
+ .. versionchanged:: 8.1
+ This decorator can be applied without parentheses.
+ """
+ if cls is None:
+ cls = t.cast("type[GrpType]", Group)
+
+ if callable(name):
+ return command(cls=cls, **attrs)(name)
+
+ return command(name, cls, **attrs)
+
+
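+# Sketch of attaching a command to a group (names are illustrative):
+#
+#     @click.group()
+#     def cli():
+#         pass
+#
+#     @cli.command()
+#     def build():
+#         click.echo("building")
+#
+#     # ``cli build`` on the command line dispatches to the ``build`` callback.
+
+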
+def _param_memo(f: t.Callable[..., t.Any], param: Parameter) -> None:
+ if isinstance(f, Command):
+ f.params.append(param)
+ else:
+ if not hasattr(f, "__click_params__"):
+ f.__click_params__ = [] # type: ignore
+
+ f.__click_params__.append(param) # type: ignore
+
+
+def argument(
+ *param_decls: str, cls: type[Argument] | None = None, **attrs: t.Any
+) -> t.Callable[[FC], FC]:
+ """Attaches an argument to the command. All positional arguments are
+ passed as parameter declarations to :class:`Argument`; all keyword
+ arguments are forwarded unchanged (except ``cls``).
+ This is equivalent to creating an :class:`Argument` instance manually
+ and attaching it to the :attr:`Command.params` list.
+
+ For the default argument class, refer to :class:`Argument` and
+ :class:`Parameter` for descriptions of parameters.
+
+ :param cls: the argument class to instantiate. This defaults to
+ :class:`Argument`.
+ :param param_decls: Passed as positional arguments to the constructor of
+ ``cls``.
+ :param attrs: Passed as keyword arguments to the constructor of ``cls``.
+ """
+ if cls is None:
+ cls = Argument
+
+ def decorator(f: FC) -> FC:
+ _param_memo(f, cls(param_decls, **attrs))
+ return f
+
+ return decorator
+
+
+def option(
+ *param_decls: str, cls: type[Option] | None = None, **attrs: t.Any
+) -> t.Callable[[FC], FC]:
+ """Attaches an option to the command. All positional arguments are
+ passed as parameter declarations to :class:`Option`; all keyword
+ arguments are forwarded unchanged (except ``cls``).
+ This is equivalent to creating an :class:`Option` instance manually
+ and attaching it to the :attr:`Command.params` list.
+
+ For the default option class, refer to :class:`Option` and
+ :class:`Parameter` for descriptions of parameters.
+
+ :param cls: the option class to instantiate. This defaults to
+ :class:`Option`.
+ :param param_decls: Passed as positional arguments to the constructor of
+ ``cls``.
+ :param attrs: Passed as keyword arguments to the constructor of ``cls``.
+ """
+ if cls is None:
+ cls = Option
+
+ def decorator(f: FC) -> FC:
+ _param_memo(f, cls(param_decls, **attrs))
+ return f
+
+ return decorator
+
+
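+# Sketch combining both decorators; the declarations are passed straight to
+# :class:`Option` and :class:`Argument` (``--count`` and ``src`` are
+# illustrative names):
+#
+#     @click.command()
+#     @click.option("--count", type=int, default=1, help="Number of passes.")
+#     @click.argument("src")
+#     def run(count, src):
+#         for _ in range(count):
+#             click.echo(src)
+
+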
+def confirmation_option(*param_decls: str, **kwargs: t.Any) -> t.Callable[[FC], FC]:
+ """Add a ``--yes`` option which shows a prompt before continuing if
+ not passed. If the prompt is declined, the program will exit.
+
+ :param param_decls: One or more option names. Defaults to the single
+ value ``"--yes"``.
+ :param kwargs: Extra arguments are passed to :func:`option`.
+ """
+
+ def callback(ctx: Context, param: Parameter, value: bool) -> None:
+ if not value:
+ ctx.abort()
+
+ if not param_decls:
+ param_decls = ("--yes",)
+
+ kwargs.setdefault("is_flag", True)
+ kwargs.setdefault("callback", callback)
+ kwargs.setdefault("expose_value", False)
+ kwargs.setdefault("prompt", "Do you want to continue?")
+ kwargs.setdefault("help", "Confirm the action without prompting.")
+ return option(*param_decls, **kwargs)
+
+
+def password_option(*param_decls: str, **kwargs: t.Any) -> t.Callable[[FC], FC]:
+ """Add a ``--password`` option which prompts for a password, hiding
+ input and asking to enter the value again for confirmation.
+
+ :param param_decls: One or more option names. Defaults to the single
+ value ``"--password"``.
+ :param kwargs: Extra arguments are passed to :func:`option`.
+ """
+ if not param_decls:
+ param_decls = ("--password",)
+
+ kwargs.setdefault("prompt", True)
+ kwargs.setdefault("confirmation_prompt", True)
+ kwargs.setdefault("hide_input", True)
+ return option(*param_decls, **kwargs)
+
+
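+# Sketch of both prompt helpers on one command (illustrative only):
+#
+#     @click.command()
+#     @click.confirmation_option(prompt="Drop the database?")
+#     @click.password_option()
+#     def drop(password):
+#         click.echo("dropped")
+#
+#     # ``--yes`` skips the confirmation prompt; ``--password`` prompts twice
+#     # with hidden input because of the defaults set above.
+
+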
+def version_option(
+ version: str | None = None,
+ *param_decls: str,
+ package_name: str | None = None,
+ prog_name: str | None = None,
+ message: str | None = None,
+ **kwargs: t.Any,
+) -> t.Callable[[FC], FC]:
+ """Add a ``--version`` option which immediately prints the version
+ number and exits the program.
+
+ If ``version`` is not provided, Click will try to detect it using
+ :func:`importlib.metadata.version` to get the version for the
+ ``package_name``.
+
+ If ``package_name`` is not provided, Click will try to detect it by
+ inspecting the stack frames. This will be used to detect the
+ version, so it must match the name of the installed package.
+
+ :param version: The version number to show. If not provided, Click
+ will try to detect it.
+ :param param_decls: One or more option names. Defaults to the single
+ value ``"--version"``.
+ :param package_name: The package name to detect the version from. If
+ not provided, Click will try to detect it.
+ :param prog_name: The name of the CLI to show in the message. If not
+ provided, it will be detected from the command.
+ :param message: The message to show. The values ``%(prog)s``,
+ ``%(package)s``, and ``%(version)s`` are available. Defaults to
+ ``"%(prog)s, version %(version)s"``.
+ :param kwargs: Extra arguments are passed to :func:`option`.
+ :raise RuntimeError: ``version`` could not be detected.
+
+ .. versionchanged:: 8.0
+ Add the ``package_name`` parameter, and the ``%(package)s``
+ value for messages.
+
+ .. versionchanged:: 8.0
+ Use :mod:`importlib.metadata` instead of ``pkg_resources``. The
+ version is detected based on the package name, not the entry
+ point name. The Python package name must match the installed
+ package name, or be passed with ``package_name=``.
+ """
+ if message is None:
+ message = _("%(prog)s, version %(version)s")
+
+ if version is None and package_name is None:
+ frame = inspect.currentframe()
+ f_back = frame.f_back if frame is not None else None
+ f_globals = f_back.f_globals if f_back is not None else None
+ # break reference cycle
+ # https://docs.python.org/3/library/inspect.html#the-interpreter-stack
+ del frame
+
+ if f_globals is not None:
+ package_name = f_globals.get("__name__")
+
+ if package_name == "__main__":
+ package_name = f_globals.get("__package__")
+
+ if package_name:
+ package_name = package_name.partition(".")[0]
+
+ def callback(ctx: Context, param: Parameter, value: bool) -> None:
+ if not value or ctx.resilient_parsing:
+ return
+
+ nonlocal prog_name
+ nonlocal version
+
+ if prog_name is None:
+ prog_name = ctx.find_root().info_name
+
+ if version is None and package_name is not None:
+ import importlib.metadata
+
+ try:
+ version = importlib.metadata.version(package_name)
+ except importlib.metadata.PackageNotFoundError:
+ raise RuntimeError(
+ f"{package_name!r} is not installed. Try passing"
+ " 'package_name' instead."
+ ) from None
+
+ if version is None:
+ raise RuntimeError(
+ f"Could not determine the version for {package_name!r} automatically."
+ )
+
+ echo(
+ message % {"prog": prog_name, "package": package_name, "version": version},
+ color=ctx.color,
+ )
+ ctx.exit()
+
+ if not param_decls:
+ param_decls = ("--version",)
+
+ kwargs.setdefault("is_flag", True)
+ kwargs.setdefault("expose_value", False)
+ kwargs.setdefault("is_eager", True)
+ kwargs.setdefault("help", _("Show the version and exit."))
+ kwargs["callback"] = callback
+ return option(*param_decls, **kwargs)
+
+
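+# Sketch of the common call, assuming the CLI lives in an installed
+# distribution named "mypkg" (illustrative):
+#
+#     @click.command()
+#     @click.version_option(package_name="mypkg")
+#     def cli():
+#         pass
+#
+#     # ``cli --version`` prints e.g. "cli, version 1.2.3" and exits.
+
+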
+def help_option(*param_decls: str, **kwargs: t.Any) -> t.Callable[[FC], FC]:
+ """Pre-configured ``--help`` option which immediately prints the help page
+ and exits the program.
+
+ :param param_decls: One or more option names. Defaults to the single
+ value ``"--help"``.
+ :param kwargs: Extra arguments are passed to :func:`option`.
+ """
+
+ def show_help(ctx: Context, param: Parameter, value: bool) -> None:
+ """Callback that print the help page on ```` and exits."""
+ if value and not ctx.resilient_parsing:
+ echo(ctx.get_help(), color=ctx.color)
+ ctx.exit()
+
+ if not param_decls:
+ param_decls = ("--help",)
+
+ kwargs.setdefault("is_flag", True)
+ kwargs.setdefault("expose_value", False)
+ kwargs.setdefault("is_eager", True)
+ kwargs.setdefault("help", _("Show this message and exit."))
+ kwargs.setdefault("callback", show_help)
+
+ return option(*param_decls, **kwargs)
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/click/exceptions.py b/stripe_config/.venv/lib/python3.12/site-packages/click/exceptions.py
new file mode 100644
index 0000000..4d782ee
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/click/exceptions.py
@@ -0,0 +1,308 @@
+from __future__ import annotations
+
+import collections.abc as cabc
+import typing as t
+from gettext import gettext as _
+from gettext import ngettext
+
+from ._compat import get_text_stderr
+from .globals import resolve_color_default
+from .utils import echo
+from .utils import format_filename
+
+if t.TYPE_CHECKING:
+ from .core import Command
+ from .core import Context
+ from .core import Parameter
+
+
+def _join_param_hints(param_hint: cabc.Sequence[str] | str | None) -> str | None:
+ if param_hint is not None and not isinstance(param_hint, str):
+ return " / ".join(repr(x) for x in param_hint)
+
+ return param_hint
+
+
+class ClickException(Exception):
+ """An exception that Click can handle and show to the user."""
+
+ #: The exit code for this exception.
+ exit_code = 1
+
+ def __init__(self, message: str) -> None:
+ super().__init__(message)
+ # The context will be removed by the time we print the message, so cache
+ # the color settings here to be used later on (in `show`)
+ self.show_color: bool | None = resolve_color_default()
+ self.message = message
+
+ def format_message(self) -> str:
+ return self.message
+
+ def __str__(self) -> str:
+ return self.message
+
+ def show(self, file: t.IO[t.Any] | None = None) -> None:
+ if file is None:
+ file = get_text_stderr()
+
+ echo(
+ _("Error: {message}").format(message=self.format_message()),
+ file=file,
+ color=self.show_color,
+ )
+
+
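+# Sketch of a custom error type built on this class (illustrative):
+#
+#     class ConfigurationError(click.ClickException):
+#         exit_code = 3
+#
+#         def format_message(self):
+#             return f"configuration problem: {self.message}"
+#
+#     # Raising ConfigurationError("missing token") inside a command prints
+#     # "Error: configuration problem: missing token" and exits with code 3.
+
+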
+class UsageError(ClickException):
+ """An internal exception that signals a usage error. This typically
+ aborts any further handling.
+
+ :param message: the error message to display.
+ :param ctx: optionally the context that caused this error. Click will
+ fill in the context automatically in some situations.
+ """
+
+ exit_code = 2
+
+ def __init__(self, message: str, ctx: Context | None = None) -> None:
+ super().__init__(message)
+ self.ctx = ctx
+ self.cmd: Command | None = self.ctx.command if self.ctx else None
+
+ def show(self, file: t.IO[t.Any] | None = None) -> None:
+ if file is None:
+ file = get_text_stderr()
+ color = None
+ hint = ""
+ if (
+ self.ctx is not None
+ and self.ctx.command.get_help_option(self.ctx) is not None
+ ):
+ hint = _("Try '{command} {option}' for help.").format(
+ command=self.ctx.command_path, option=self.ctx.help_option_names[0]
+ )
+ hint = f"{hint}\n"
+ if self.ctx is not None:
+ color = self.ctx.color
+ echo(f"{self.ctx.get_usage()}\n{hint}", file=file, color=color)
+ echo(
+ _("Error: {message}").format(message=self.format_message()),
+ file=file,
+ color=color,
+ )
+
+
+class BadParameter(UsageError):
+ """An exception that formats out a standardized error message for a
+ bad parameter. This is useful when thrown from a callback or type as
+ Click will attach contextual information to it (for instance, which
+ parameter it is).
+
+ .. versionadded:: 2.0
+
+ :param param: the parameter object that caused this error. This can
+ be left out, and Click will attach this info itself
+ if possible.
+ :param param_hint: a string that shows up as parameter name. This
+ can be used as alternative to `param` in cases
+ where custom validation should happen. If it is
+ a string it's used as such, if it's a list then
+ each item is quoted and separated.
+ """
+
+ def __init__(
+ self,
+ message: str,
+ ctx: Context | None = None,
+ param: Parameter | None = None,
+ param_hint: cabc.Sequence[str] | str | None = None,
+ ) -> None:
+ super().__init__(message, ctx)
+ self.param = param
+ self.param_hint = param_hint
+
+ def format_message(self) -> str:
+ if self.param_hint is not None:
+ param_hint = self.param_hint
+ elif self.param is not None:
+ param_hint = self.param.get_error_hint(self.ctx) # type: ignore
+ else:
+ return _("Invalid value: {message}").format(message=self.message)
+
+ return _("Invalid value for {param_hint}: {message}").format(
+ param_hint=_join_param_hints(param_hint), message=self.message
+ )
+
+
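+# Sketch of raising this from a value callback (names are illustrative):
+#
+#     def validate_port(ctx, param, value):
+#         if not 0 < value < 65536:
+#             raise click.BadParameter("port must be between 1 and 65535")
+#         return value
+#
+#     @click.command()
+#     @click.option("--port", type=int, default=8000, callback=validate_port)
+#     def serve(port):
+#         click.echo(f"listening on {port}")
+
+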
+class MissingParameter(BadParameter):
+ """Raised if click required an option or argument but it was not
+ provided when invoking the script.
+
+ .. versionadded:: 4.0
+
+ :param param_type: a string that indicates the type of the parameter.
+ The default is to inherit the parameter type from
+ the given `param`. Valid values are ``'parameter'``,
+ ``'option'`` or ``'argument'``.
+ """
+
+ def __init__(
+ self,
+ message: str | None = None,
+ ctx: Context | None = None,
+ param: Parameter | None = None,
+ param_hint: cabc.Sequence[str] | str | None = None,
+ param_type: str | None = None,
+ ) -> None:
+ super().__init__(message or "", ctx, param, param_hint)
+ self.param_type = param_type
+
+ def format_message(self) -> str:
+ if self.param_hint is not None:
+ param_hint: cabc.Sequence[str] | str | None = self.param_hint
+ elif self.param is not None:
+ param_hint = self.param.get_error_hint(self.ctx) # type: ignore
+ else:
+ param_hint = None
+
+ param_hint = _join_param_hints(param_hint)
+ param_hint = f" {param_hint}" if param_hint else ""
+
+ param_type = self.param_type
+ if param_type is None and self.param is not None:
+ param_type = self.param.param_type_name
+
+ msg = self.message
+ if self.param is not None:
+ msg_extra = self.param.type.get_missing_message(
+ param=self.param, ctx=self.ctx
+ )
+ if msg_extra:
+ if msg:
+ msg += f". {msg_extra}"
+ else:
+ msg = msg_extra
+
+ msg = f" {msg}" if msg else ""
+
+ # Translate param_type for known types.
+ if param_type == "argument":
+ missing = _("Missing argument")
+ elif param_type == "option":
+ missing = _("Missing option")
+ elif param_type == "parameter":
+ missing = _("Missing parameter")
+ else:
+ missing = _("Missing {param_type}").format(param_type=param_type)
+
+ return f"{missing}{param_hint}.{msg}"
+
+ def __str__(self) -> str:
+ if not self.message:
+ param_name = self.param.name if self.param else None
+ return _("Missing parameter: {param_name}").format(param_name=param_name)
+ else:
+ return self.message
+
+
+class NoSuchOption(UsageError):
+ """Raised if click attempted to handle an option that does not
+ exist.
+
+ .. versionadded:: 4.0
+ """
+
+ def __init__(
+ self,
+ option_name: str,
+ message: str | None = None,
+ possibilities: cabc.Sequence[str] | None = None,
+ ctx: Context | None = None,
+ ) -> None:
+ if message is None:
+ message = _("No such option: {name}").format(name=option_name)
+
+ super().__init__(message, ctx)
+ self.option_name = option_name
+ self.possibilities = possibilities
+
+ def format_message(self) -> str:
+ if not self.possibilities:
+ return self.message
+
+ possibility_str = ", ".join(sorted(self.possibilities))
+ suggest = ngettext(
+ "Did you mean {possibility}?",
+ "(Possible options: {possibilities})",
+ len(self.possibilities),
+ ).format(possibility=possibility_str, possibilities=possibility_str)
+ return f"{self.message} {suggest}"
+
+
+class BadOptionUsage(UsageError):
+ """Raised if an option is generally supplied but the use of the option
+ was incorrect. This is for instance raised if the number of arguments
+ for an option is not correct.
+
+ .. versionadded:: 4.0
+
+ :param option_name: the name of the option being used incorrectly.
+ """
+
+ def __init__(
+ self, option_name: str, message: str, ctx: Context | None = None
+ ) -> None:
+ super().__init__(message, ctx)
+ self.option_name = option_name
+
+
+class BadArgumentUsage(UsageError):
+ """Raised if an argument is generally supplied but the use of the argument
+ was incorrect. This is for instance raised if the number of values
+ for an argument is not correct.
+
+ .. versionadded:: 6.0
+ """
+
+
+class NoArgsIsHelpError(UsageError):
+ def __init__(self, ctx: Context) -> None:
+ self.ctx: Context
+ super().__init__(ctx.get_help(), ctx=ctx)
+
+ def show(self, file: t.IO[t.Any] | None = None) -> None:
+ echo(self.format_message(), file=file, err=True, color=self.ctx.color)
+
+
+class FileError(ClickException):
+ """Raised if a file cannot be opened."""
+
+ def __init__(self, filename: str, hint: str | None = None) -> None:
+ if hint is None:
+ hint = _("unknown error")
+
+ super().__init__(hint)
+ self.ui_filename: str = format_filename(filename)
+ self.filename = filename
+
+ def format_message(self) -> str:
+ return _("Could not open file {filename!r}: {message}").format(
+ filename=self.ui_filename, message=self.message
+ )
+
+
+class Abort(RuntimeError):
+ """An internal signalling exception that signals Click to abort."""
+
+
+class Exit(RuntimeError):
+ """An exception that indicates that the application should exit with some
+ status code.
+
+ :param code: the status code to exit with.
+ """
+
+ __slots__ = ("exit_code",)
+
+ def __init__(self, code: int = 0) -> None:
+ self.exit_code: int = code
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/click/formatting.py b/stripe_config/.venv/lib/python3.12/site-packages/click/formatting.py
new file mode 100644
index 0000000..0b64f83
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/click/formatting.py
@@ -0,0 +1,301 @@
+from __future__ import annotations
+
+import collections.abc as cabc
+from contextlib import contextmanager
+from gettext import gettext as _
+
+from ._compat import term_len
+from .parser import _split_opt
+
+# Can force a width. This is used by the test system
+FORCED_WIDTH: int | None = None
+
+
+def measure_table(rows: cabc.Iterable[tuple[str, str]]) -> tuple[int, ...]:
+ widths: dict[int, int] = {}
+
+ for row in rows:
+ for idx, col in enumerate(row):
+ widths[idx] = max(widths.get(idx, 0), term_len(col))
+
+ return tuple(y for x, y in sorted(widths.items()))
+
+
+def iter_rows(
+ rows: cabc.Iterable[tuple[str, str]], col_count: int
+) -> cabc.Iterator[tuple[str, ...]]:
+ for row in rows:
+ yield row + ("",) * (col_count - len(row))
+
+
+def wrap_text(
+ text: str,
+ width: int = 78,
+ initial_indent: str = "",
+ subsequent_indent: str = "",
+ preserve_paragraphs: bool = False,
+) -> str:
+ """A helper function that intelligently wraps text. By default, it
+ assumes that it operates on a single paragraph of text but if the
+ `preserve_paragraphs` parameter is provided it will intelligently
+ handle paragraphs (defined by two empty lines).
+
+ If paragraphs are handled, a paragraph can be prefixed with an empty
+ line containing the ``\\b`` character (``\\x08``) to indicate that
+ no rewrapping should happen in that block.
+
+ :param text: the text that should be rewrapped.
+ :param width: the maximum width for the text.
+ :param initial_indent: the initial indent that should be placed on the
+ first line as a string.
+ :param subsequent_indent: the indent string that should be placed on
+ each consecutive line.
+ :param preserve_paragraphs: if this flag is set then the wrapping will
+ intelligently handle paragraphs.
+ """
+ from ._textwrap import TextWrapper
+
+ text = text.expandtabs()
+ wrapper = TextWrapper(
+ width,
+ initial_indent=initial_indent,
+ subsequent_indent=subsequent_indent,
+ replace_whitespace=False,
+ )
+ if not preserve_paragraphs:
+ return wrapper.fill(text)
+
+ p: list[tuple[int, bool, str]] = []
+ buf: list[str] = []
+ indent = None
+
+ def _flush_par() -> None:
+ if not buf:
+ return
+ if buf[0].strip() == "\b":
+ p.append((indent or 0, True, "\n".join(buf[1:])))
+ else:
+ p.append((indent or 0, False, " ".join(buf)))
+ del buf[:]
+
+ for line in text.splitlines():
+ if not line:
+ _flush_par()
+ indent = None
+ else:
+ if indent is None:
+ orig_len = term_len(line)
+ line = line.lstrip()
+ indent = orig_len - term_len(line)
+ buf.append(line)
+ _flush_par()
+
+ rv = []
+ for indent, raw, text in p:
+ with wrapper.extra_indent(" " * indent):
+ if raw:
+ rv.append(wrapper.indent_only(text))
+ else:
+ rv.append(wrapper.fill(text))
+
+ return "\n\n".join(rv)
+
+
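+# A quick illustration of the paragraph handling above (``\b`` protects a
+# block from rewrapping; the string is illustrative):
+#
+#     text = "A first paragraph that is long enough to wrap.\n\n\b\nkeep\nthese lines"
+#     print(wrap_text(text, width=20, preserve_paragraphs=True))
+#
+#     # The first paragraph is re-filled to 20 columns, while the block
+#     # after the ``\b`` marker keeps its original line breaks.
+
+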
+class HelpFormatter:
+ """This class helps with formatting text-based help pages. It's
+ usually just needed for very special internal cases, but it's also
+ exposed so that developers can write their own fancy outputs.
+
+ At present, it always writes into memory.
+
+ :param indent_increment: the additional increment for each level.
+ :param width: the width for the text. This defaults to the terminal
+ width clamped to a maximum of 78.
+ """
+
+ def __init__(
+ self,
+ indent_increment: int = 2,
+ width: int | None = None,
+ max_width: int | None = None,
+ ) -> None:
+ self.indent_increment = indent_increment
+ if max_width is None:
+ max_width = 80
+ if width is None:
+ import shutil
+
+ width = FORCED_WIDTH
+ if width is None:
+ width = max(min(shutil.get_terminal_size().columns, max_width) - 2, 50)
+ self.width = width
+ self.current_indent: int = 0
+ self.buffer: list[str] = []
+
+ def write(self, string: str) -> None:
+ """Writes a unicode string into the internal buffer."""
+ self.buffer.append(string)
+
+ def indent(self) -> None:
+ """Increases the indentation."""
+ self.current_indent += self.indent_increment
+
+ def dedent(self) -> None:
+ """Decreases the indentation."""
+ self.current_indent -= self.indent_increment
+
+ def write_usage(self, prog: str, args: str = "", prefix: str | None = None) -> None:
+ """Writes a usage line into the buffer.
+
+ :param prog: the program name.
+ :param args: whitespace separated list of arguments.
+ :param prefix: The prefix for the first line. Defaults to
+ ``"Usage: "``.
+ """
+ if prefix is None:
+ prefix = f"{_('Usage:')} "
+
+ usage_prefix = f"{prefix:>{self.current_indent}}{prog} "
+ text_width = self.width - self.current_indent
+
+ if text_width >= (term_len(usage_prefix) + 20):
+ # The arguments will fit to the right of the prefix.
+ indent = " " * term_len(usage_prefix)
+ self.write(
+ wrap_text(
+ args,
+ text_width,
+ initial_indent=usage_prefix,
+ subsequent_indent=indent,
+ )
+ )
+ else:
+ # The prefix is too long, put the arguments on the next line.
+ self.write(usage_prefix)
+ self.write("\n")
+ indent = " " * (max(self.current_indent, term_len(prefix)) + 4)
+ self.write(
+ wrap_text(
+ args, text_width, initial_indent=indent, subsequent_indent=indent
+ )
+ )
+
+ self.write("\n")
+
+ def write_heading(self, heading: str) -> None:
+ """Writes a heading into the buffer."""
+ self.write(f"{'':>{self.current_indent}}{heading}:\n")
+
+ def write_paragraph(self) -> None:
+ """Writes a paragraph into the buffer."""
+ if self.buffer:
+ self.write("\n")
+
+ def write_text(self, text: str) -> None:
+ """Writes re-indented text into the buffer. This rewraps and
+ preserves paragraphs.
+ """
+ indent = " " * self.current_indent
+ self.write(
+ wrap_text(
+ text,
+ self.width,
+ initial_indent=indent,
+ subsequent_indent=indent,
+ preserve_paragraphs=True,
+ )
+ )
+ self.write("\n")
+
+ def write_dl(
+ self,
+ rows: cabc.Sequence[tuple[str, str]],
+ col_max: int = 30,
+ col_spacing: int = 2,
+ ) -> None:
+ """Writes a definition list into the buffer. This is how options
+ and commands are usually formatted.
+
+ :param rows: a list of two item tuples for the terms and values.
+ :param col_max: the maximum width of the first column.
+ :param col_spacing: the number of spaces between the first and
+ second column.
+ """
+ rows = list(rows)
+ widths = measure_table(rows)
+ if len(widths) != 2:
+ raise TypeError("Expected two columns for definition list")
+
+ first_col = min(widths[0], col_max) + col_spacing
+
+ for first, second in iter_rows(rows, len(widths)):
+ self.write(f"{'':>{self.current_indent}}{first}")
+ if not second:
+ self.write("\n")
+ continue
+ if term_len(first) <= first_col - col_spacing:
+ self.write(" " * (first_col - term_len(first)))
+ else:
+ self.write("\n")
+ self.write(" " * (first_col + self.current_indent))
+
+ text_width = max(self.width - first_col - 2, 10)
+ wrapped_text = wrap_text(second, text_width, preserve_paragraphs=True)
+ lines = wrapped_text.splitlines()
+
+ if lines:
+ self.write(f"{lines[0]}\n")
+
+ for line in lines[1:]:
+ self.write(f"{'':>{first_col + self.current_indent}}{line}\n")
+ else:
+ self.write("\n")
+
+ @contextmanager
+ def section(self, name: str) -> cabc.Iterator[None]:
+ """Helpful context manager that writes a paragraph, a heading,
+ and the indents.
+
+ :param name: the section name that is written as heading.
+ """
+ self.write_paragraph()
+ self.write_heading(name)
+ self.indent()
+ try:
+ yield
+ finally:
+ self.dedent()
+
+ @contextmanager
+ def indentation(self) -> cabc.Iterator[None]:
+ """A context manager that increases the indentation."""
+ self.indent()
+ try:
+ yield
+ finally:
+ self.dedent()
+
+ def getvalue(self) -> str:
+ """Returns the buffer contents."""
+ return "".join(self.buffer)
+
+
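+# Sketch of driving the formatter directly (values are illustrative):
+#
+#     formatter = HelpFormatter(width=40)
+#     formatter.write_usage("tool", "[OPTIONS] SRC")
+#     with formatter.section("Options"):
+#         formatter.write_dl([("--verbose", "Enable chatty output.")])
+#     print(formatter.getvalue())
+
+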
+def join_options(options: cabc.Sequence[str]) -> tuple[str, bool]:
+ """Given a list of option strings this joins them in the most appropriate
+ way and returns them in the form ``(formatted_string,
+ any_prefix_is_slash)`` where the second item in the tuple is a flag that
+ indicates if any of the option prefixes was a slash.
+ """
+ rv = []
+ any_prefix_is_slash = False
+
+ for opt in options:
+ prefix = _split_opt(opt)[0]
+
+ if prefix == "/":
+ any_prefix_is_slash = True
+
+ rv.append((len(prefix), opt))
+
+ rv.sort(key=lambda x: x[0])
+ return ", ".join(x[1] for x in rv), any_prefix_is_slash
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/click/globals.py b/stripe_config/.venv/lib/python3.12/site-packages/click/globals.py
new file mode 100644
index 0000000..a2f9172
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/click/globals.py
@@ -0,0 +1,67 @@
+from __future__ import annotations
+
+import typing as t
+from threading import local
+
+if t.TYPE_CHECKING:
+ from .core import Context
+
+_local = local()
+
+
+@t.overload
+def get_current_context(silent: t.Literal[False] = False) -> Context: ...
+
+
+@t.overload
+def get_current_context(silent: bool = ...) -> Context | None: ...
+
+
+def get_current_context(silent: bool = False) -> Context | None:
+ """Returns the current click context. This can be used as a way to
+ access the current context object from anywhere. This is a more implicit
+ alternative to the :func:`pass_context` decorator. This function is
+ primarily useful for helpers such as :func:`echo` which might be
+ interested in changing its behavior based on the current context.
+
+ To push the current context, :meth:`Context.scope` can be used.
+
+ .. versionadded:: 5.0
+
+ :param silent: if set to `True` the return value is `None` if no context
+ is available. The default behavior is to raise a
+ :exc:`RuntimeError`.
+ """
+ try:
+ return t.cast("Context", _local.stack[-1])
+ except (AttributeError, IndexError) as e:
+ if not silent:
+ raise RuntimeError("There is no active click context.") from e
+
+ return None
+
+
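+# Sketch of a helper that works both inside and outside a Click invocation
+# (the function name is illustrative):
+#
+#     def current_command_name():
+#         ctx = get_current_context(silent=True)
+#         return ctx.info_name if ctx is not None else None
+
+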
+def push_context(ctx: Context) -> None:
+ """Pushes a new context to the current stack."""
+ _local.__dict__.setdefault("stack", []).append(ctx)
+
+
+def pop_context() -> None:
+ """Removes the top level from the stack."""
+ _local.stack.pop()
+
+
+def resolve_color_default(color: bool | None = None) -> bool | None:
+ """Internal helper to get the default value of the color flag. If a
+ value is passed it's returned unchanged, otherwise it's looked up from
+ the current context.
+ """
+ if color is not None:
+ return color
+
+ ctx = get_current_context(silent=True)
+
+ if ctx is not None:
+ return ctx.color
+
+ return None
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/click/parser.py b/stripe_config/.venv/lib/python3.12/site-packages/click/parser.py
new file mode 100644
index 0000000..1ea1f71
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/click/parser.py
@@ -0,0 +1,532 @@
+"""
+This module started out as largely a copy paste from the stdlib's
+optparse module with the features removed that we do not need from
+optparse because we implement them in Click on a higher level (for
+instance type handling, help formatting and a lot more).
+
+The plan is to remove more and more from here over time.
+
+The reason this is a separate module rather than the stdlib's optparse
+is that Python 2.x and 3.x differ in the error messages they generate,
+and the stdlib's optparse uses gettext unnecessarily, which might cause
+us issues.
+
+Click uses parts of optparse written by Gregory P. Ward and maintained
+by the Python Software Foundation. This is limited to code in parser.py.
+
+Copyright 2001-2006 Gregory P. Ward. All rights reserved.
+Copyright 2002-2006 Python Software Foundation. All rights reserved.
+"""
+
+# This code uses parts of optparse written by Gregory P. Ward and
+# maintained by the Python Software Foundation.
+# Copyright 2001-2006 Gregory P. Ward
+# Copyright 2002-2006 Python Software Foundation
+from __future__ import annotations
+
+import collections.abc as cabc
+import typing as t
+from collections import deque
+from gettext import gettext as _
+from gettext import ngettext
+
+from ._utils import FLAG_NEEDS_VALUE
+from ._utils import UNSET
+from .exceptions import BadArgumentUsage
+from .exceptions import BadOptionUsage
+from .exceptions import NoSuchOption
+from .exceptions import UsageError
+
+if t.TYPE_CHECKING:
+ from ._utils import T_FLAG_NEEDS_VALUE
+ from ._utils import T_UNSET
+ from .core import Argument as CoreArgument
+ from .core import Context
+ from .core import Option as CoreOption
+ from .core import Parameter as CoreParameter
+
+V = t.TypeVar("V")
+
+
+def _unpack_args(
+ args: cabc.Sequence[str], nargs_spec: cabc.Sequence[int]
+) -> tuple[cabc.Sequence[str | cabc.Sequence[str | None] | None], list[str]]:
+ """Given an iterable of arguments and an iterable of nargs specifications,
+ it returns a tuple with all the unpacked arguments at the first index
+ and all remaining arguments as the second.
+
+ The nargs specification is the number of arguments that should be consumed
+ or `-1` to indicate that this position should eat up all the remainders.
+
+ Missing items are filled with ``UNSET``.
+ """
+ args = deque(args)
+ nargs_spec = deque(nargs_spec)
+ rv: list[str | tuple[str | T_UNSET, ...] | T_UNSET] = []
+ spos: int | None = None
+
+ def _fetch(c: deque[V]) -> V | T_UNSET:
+ try:
+ if spos is None:
+ return c.popleft()
+ else:
+ return c.pop()
+ except IndexError:
+ return UNSET
+
+ while nargs_spec:
+ nargs = _fetch(nargs_spec)
+
+ if nargs is None:
+ continue
+
+ if nargs == 1:
+ rv.append(_fetch(args)) # type: ignore[arg-type]
+ elif nargs > 1:
+ x = [_fetch(args) for _ in range(nargs)]
+
+ # If we're reversed, we're pulling in the arguments in reverse,
+ # so we need to turn them around.
+ if spos is not None:
+ x.reverse()
+
+ rv.append(tuple(x))
+ elif nargs < 0:
+ if spos is not None:
+ raise TypeError("Cannot have two nargs < 0")
+
+ spos = len(rv)
+ rv.append(UNSET)
+
+ # spos is the position of the wildcard (star). If it's not `None`,
+ # we fill it with the remainder.
+ if spos is not None:
+ rv[spos] = tuple(args)
+ args = []
+ rv[spos + 1 :] = reversed(rv[spos + 1 :])
+
+ return tuple(rv), list(args)
+
+
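+# Worked example of the nargs handling above (``UNSET`` fills missing slots,
+# ``-1`` soaks up the remainder):
+#
+#     _unpack_args(["a", "b", "c", "d"], [1, -1, 1])
+#     # -> (("a", ("b", "c"), "d"), [])
+#
+#     _unpack_args(["a"], [2])
+#     # -> ((("a", UNSET),), [])
+
+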
+def _split_opt(opt: str) -> tuple[str, str]:
+ first = opt[:1]
+ if first.isalnum():
+ return "", opt
+ if opt[1:2] == first:
+ return opt[:2], opt[2:]
+ return first, opt[1:]
+
+
+def _normalize_opt(opt: str, ctx: Context | None) -> str:
+ if ctx is None or ctx.token_normalize_func is None:
+ return opt
+ prefix, opt = _split_opt(opt)
+ return f"{prefix}{ctx.token_normalize_func(opt)}"
+
+
+class _Option:
+ def __init__(
+ self,
+ obj: CoreOption,
+ opts: cabc.Sequence[str],
+ dest: str | None,
+ action: str | None = None,
+ nargs: int = 1,
+ const: t.Any | None = None,
+ ):
+ self._short_opts = []
+ self._long_opts = []
+ self.prefixes: set[str] = set()
+
+ for opt in opts:
+ prefix, value = _split_opt(opt)
+ if not prefix:
+ raise ValueError(f"Invalid start character for option ({opt})")
+ self.prefixes.add(prefix[0])
+ if len(prefix) == 1 and len(value) == 1:
+ self._short_opts.append(opt)
+ else:
+ self._long_opts.append(opt)
+ self.prefixes.add(prefix)
+
+ if action is None:
+ action = "store"
+
+ self.dest = dest
+ self.action = action
+ self.nargs = nargs
+ self.const = const
+ self.obj = obj
+
+ @property
+ def takes_value(self) -> bool:
+ return self.action in ("store", "append")
+
+ def process(self, value: t.Any, state: _ParsingState) -> None:
+ if self.action == "store":
+ state.opts[self.dest] = value # type: ignore
+ elif self.action == "store_const":
+ state.opts[self.dest] = self.const # type: ignore
+ elif self.action == "append":
+ state.opts.setdefault(self.dest, []).append(value) # type: ignore
+ elif self.action == "append_const":
+ state.opts.setdefault(self.dest, []).append(self.const) # type: ignore
+ elif self.action == "count":
+ state.opts[self.dest] = state.opts.get(self.dest, 0) + 1 # type: ignore
+ else:
+ raise ValueError(f"unknown action '{self.action}'")
+ state.order.append(self.obj)
+
+
+class _Argument:
+ def __init__(self, obj: CoreArgument, dest: str | None, nargs: int = 1):
+ self.dest = dest
+ self.nargs = nargs
+ self.obj = obj
+
+ def process(
+ self,
+ value: str | cabc.Sequence[str | None] | None | T_UNSET,
+ state: _ParsingState,
+ ) -> None:
+ if self.nargs > 1:
+ assert isinstance(value, cabc.Sequence)
+ holes = sum(1 for x in value if x is UNSET)
+ if holes == len(value):
+ value = UNSET
+ elif holes != 0:
+ raise BadArgumentUsage(
+ _("Argument {name!r} takes {nargs} values.").format(
+ name=self.dest, nargs=self.nargs
+ )
+ )
+
+ # We failed to collect any argument value so we consider the argument as unset.
+ if value == ():
+ value = UNSET
+
+ state.opts[self.dest] = value # type: ignore
+ state.order.append(self.obj)
+
+
+class _ParsingState:
+ def __init__(self, rargs: list[str]) -> None:
+ self.opts: dict[str, t.Any] = {}
+ self.largs: list[str] = []
+ self.rargs = rargs
+ self.order: list[CoreParameter] = []
+
+
+class _OptionParser:
+ """The option parser is an internal class that is ultimately used to
+ parse options and arguments. It's modelled after optparse and brings
+ a similar but vastly simplified API. It should generally not be used
+ directly as the high level Click classes wrap it for you.
+
+ It's not nearly as extensible as optparse or argparse as it does not
+ implement features that are implemented on a higher level (such as
+ types or defaults).
+
+    :param ctx: optionally the :class:`~click.Context` that this parser
+        belongs to.
+
+ .. deprecated:: 8.2
+ Will be removed in Click 9.0.
+ """
+
+ def __init__(self, ctx: Context | None = None) -> None:
+ #: The :class:`~click.Context` for this parser. This might be
+ #: `None` for some advanced use cases.
+ self.ctx = ctx
+ #: This controls how the parser deals with interspersed arguments.
+ #: If this is set to `False`, the parser will stop on the first
+ #: non-option. Click uses this to implement nested subcommands
+ #: safely.
+ self.allow_interspersed_args: bool = True
+ #: This tells the parser how to deal with unknown options. By
+ #: default it will error out (which is sensible), but there is a
+ #: second mode where it will ignore it and continue processing
+ #: after shifting all the unknown options into the resulting args.
+ self.ignore_unknown_options: bool = False
+
+ if ctx is not None:
+ self.allow_interspersed_args = ctx.allow_interspersed_args
+ self.ignore_unknown_options = ctx.ignore_unknown_options
+
+ self._short_opt: dict[str, _Option] = {}
+ self._long_opt: dict[str, _Option] = {}
+ self._opt_prefixes = {"-", "--"}
+ self._args: list[_Argument] = []
+
+ def add_option(
+ self,
+ obj: CoreOption,
+ opts: cabc.Sequence[str],
+ dest: str | None,
+ action: str | None = None,
+ nargs: int = 1,
+ const: t.Any | None = None,
+ ) -> None:
+ """Adds a new option named `dest` to the parser. The destination
+ is not inferred (unlike with optparse) and needs to be explicitly
+ provided. Action can be any of ``store``, ``store_const``,
+ ``append``, ``append_const`` or ``count``.
+
+ The `obj` can be used to identify the option in the order list
+ that is returned from the parser.
+ """
+ opts = [_normalize_opt(opt, self.ctx) for opt in opts]
+ option = _Option(obj, opts, dest, action=action, nargs=nargs, const=const)
+ self._opt_prefixes.update(option.prefixes)
+ for opt in option._short_opts:
+ self._short_opt[opt] = option
+ for opt in option._long_opts:
+ self._long_opt[opt] = option
+
+ def add_argument(self, obj: CoreArgument, dest: str | None, nargs: int = 1) -> None:
+ """Adds a positional argument named `dest` to the parser.
+
+        The `obj` can be used to identify the argument in the order list
+ that is returned from the parser.
+ """
+ self._args.append(_Argument(obj, dest=dest, nargs=nargs))
+
+ def parse_args(
+ self, args: list[str]
+ ) -> tuple[dict[str, t.Any], list[str], list[CoreParameter]]:
+ """Parses positional arguments and returns ``(values, args, order)``
+ for the parsed options and arguments as well as the leftover
+ arguments if there are any. The order is a list of objects as they
+ appear on the command line. If arguments appear multiple times they
+ will be memorized multiple times as well.
+ """
+ state = _ParsingState(args)
+ try:
+ self._process_args_for_options(state)
+ self._process_args_for_args(state)
+ except UsageError:
+ if self.ctx is None or not self.ctx.resilient_parsing:
+ raise
+ return state.opts, state.largs, state.order
+
+ def _process_args_for_args(self, state: _ParsingState) -> None:
+ pargs, args = _unpack_args(
+ state.largs + state.rargs, [x.nargs for x in self._args]
+ )
+
+ for idx, arg in enumerate(self._args):
+ arg.process(pargs[idx], state)
+
+ state.largs = args
+ state.rargs = []
+
+ def _process_args_for_options(self, state: _ParsingState) -> None:
+ while state.rargs:
+ arg = state.rargs.pop(0)
+ arglen = len(arg)
+ # Double dashes always handled explicitly regardless of what
+ # prefixes are valid.
+ if arg == "--":
+ return
+ elif arg[:1] in self._opt_prefixes and arglen > 1:
+ self._process_opts(arg, state)
+ elif self.allow_interspersed_args:
+ state.largs.append(arg)
+ else:
+ state.rargs.insert(0, arg)
+ return
+
+ # Say this is the original argument list:
+ # [arg0, arg1, ..., arg(i-1), arg(i), arg(i+1), ..., arg(N-1)]
+ # ^
+ # (we are about to process arg(i)).
+ #
+ # Then rargs is [arg(i), ..., arg(N-1)] and largs is a *subset* of
+ # [arg0, ..., arg(i-1)] (any options and their arguments will have
+ # been removed from largs).
+ #
+ # The while loop will usually consume 1 or more arguments per pass.
+ # If it consumes 1 (eg. arg is an option that takes no arguments),
+ # then after _process_arg() is done the situation is:
+ #
+ # largs = subset of [arg0, ..., arg(i)]
+ # rargs = [arg(i+1), ..., arg(N-1)]
+ #
+ # If allow_interspersed_args is false, largs will always be
+ # *empty* -- still a subset of [arg0, ..., arg(i-1)], but
+ # not a very interesting subset!
+
+ def _match_long_opt(
+ self, opt: str, explicit_value: str | None, state: _ParsingState
+ ) -> None:
+ if opt not in self._long_opt:
+ from difflib import get_close_matches
+
+ possibilities = get_close_matches(opt, self._long_opt)
+ raise NoSuchOption(opt, possibilities=possibilities, ctx=self.ctx)
+
+ option = self._long_opt[opt]
+ if option.takes_value:
+ # At this point it's safe to modify rargs by injecting the
+ # explicit value, because no exception is raised in this
+ # branch. This means that the inserted value will be fully
+ # consumed.
+ if explicit_value is not None:
+ state.rargs.insert(0, explicit_value)
+
+ value = self._get_value_from_state(opt, option, state)
+
+ elif explicit_value is not None:
+ raise BadOptionUsage(
+ opt, _("Option {name!r} does not take a value.").format(name=opt)
+ )
+
+ else:
+ value = UNSET
+
+ option.process(value, state)
+
+ def _match_short_opt(self, arg: str, state: _ParsingState) -> None:
+ stop = False
+ i = 1
+ prefix = arg[0]
+ unknown_options = []
+
+ for ch in arg[1:]:
+ opt = _normalize_opt(f"{prefix}{ch}", self.ctx)
+ option = self._short_opt.get(opt)
+ i += 1
+
+ if not option:
+ if self.ignore_unknown_options:
+ unknown_options.append(ch)
+ continue
+ raise NoSuchOption(opt, ctx=self.ctx)
+ if option.takes_value:
+ # Any characters left in arg? Pretend they're the
+ # next arg, and stop consuming characters of arg.
+ if i < len(arg):
+ state.rargs.insert(0, arg[i:])
+ stop = True
+
+ value = self._get_value_from_state(opt, option, state)
+
+ else:
+ value = UNSET
+
+ option.process(value, state)
+
+ if stop:
+ break
+
+ # If we got any unknown options we recombine the string of the
+ # remaining options and re-attach the prefix, then report that
+ # to the state as new larg. This way there is basic combinatorics
+ # that can be achieved while still ignoring unknown arguments.
+ if self.ignore_unknown_options and unknown_options:
+ state.largs.append(f"{prefix}{''.join(unknown_options)}")
+
+ def _get_value_from_state(
+ self, option_name: str, option: _Option, state: _ParsingState
+ ) -> str | cabc.Sequence[str] | T_FLAG_NEEDS_VALUE:
+ nargs = option.nargs
+
+ value: str | cabc.Sequence[str] | T_FLAG_NEEDS_VALUE
+
+ if len(state.rargs) < nargs:
+ if option.obj._flag_needs_value:
+ # Option allows omitting the value.
+ value = FLAG_NEEDS_VALUE
+ else:
+ raise BadOptionUsage(
+ option_name,
+ ngettext(
+ "Option {name!r} requires an argument.",
+ "Option {name!r} requires {nargs} arguments.",
+ nargs,
+ ).format(name=option_name, nargs=nargs),
+ )
+ elif nargs == 1:
+ next_rarg = state.rargs[0]
+
+ if (
+ option.obj._flag_needs_value
+ and isinstance(next_rarg, str)
+ and next_rarg[:1] in self._opt_prefixes
+ and len(next_rarg) > 1
+ ):
+ # The next arg looks like the start of an option, don't
+ # use it as the value if omitting the value is allowed.
+ value = FLAG_NEEDS_VALUE
+ else:
+ value = state.rargs.pop(0)
+ else:
+ value = tuple(state.rargs[:nargs])
+ del state.rargs[:nargs]
+
+ return value
+
+ def _process_opts(self, arg: str, state: _ParsingState) -> None:
+ explicit_value = None
+ # Long option handling happens in two parts. The first part is
+ # supporting explicitly attached values. In any case, we will try
+ # to long match the option first.
+ if "=" in arg:
+ long_opt, explicit_value = arg.split("=", 1)
+ else:
+ long_opt = arg
+ norm_long_opt = _normalize_opt(long_opt, self.ctx)
+
+ # At this point we will match the (assumed) long option through
+ # the long option matching code. Note that this allows options
+ # like "-foo" to be matched as long options.
+ try:
+ self._match_long_opt(norm_long_opt, explicit_value, state)
+ except NoSuchOption:
+ # At this point the long option matching failed, and we need
+ # to try with short options. However there is a special rule
+ # which says, that if we have a two character options prefix
+ # (applies to "--foo" for instance), we do not dispatch to the
+ # short option code and will instead raise the no option
+ # error.
+ if arg[:2] not in self._opt_prefixes:
+ self._match_short_opt(arg, state)
+ return
+
+ if not self.ignore_unknown_options:
+ raise
+
+ state.largs.append(arg)
+
+
+def __getattr__(name: str) -> object:
+ import warnings
+
+ if name in {
+ "OptionParser",
+ "Argument",
+ "Option",
+ "split_opt",
+ "normalize_opt",
+ "ParsingState",
+ }:
+ warnings.warn(
+ f"'parser.{name}' is deprecated and will be removed in Click 9.0."
+ " The old parser is available in 'optparse'.",
+ DeprecationWarning,
+ stacklevel=2,
+ )
+ return globals()[f"_{name}"]
+
+ if name == "split_arg_string":
+ from .shell_completion import split_arg_string
+
+ warnings.warn(
+ "Importing 'parser.split_arg_string' is deprecated, it will only be"
+ " available in 'shell_completion' in Click 9.0.",
+ DeprecationWarning,
+ stacklevel=2,
+ )
+ return split_arg_string
+
+ raise AttributeError(name)
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/click/py.typed b/stripe_config/.venv/lib/python3.12/site-packages/click/py.typed
new file mode 100644
index 0000000..e69de29
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/click/shell_completion.py b/stripe_config/.venv/lib/python3.12/site-packages/click/shell_completion.py
new file mode 100644
index 0000000..8f1564c
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/click/shell_completion.py
@@ -0,0 +1,667 @@
+from __future__ import annotations
+
+import collections.abc as cabc
+import os
+import re
+import typing as t
+from gettext import gettext as _
+
+from .core import Argument
+from .core import Command
+from .core import Context
+from .core import Group
+from .core import Option
+from .core import Parameter
+from .core import ParameterSource
+from .utils import echo
+
+
+def shell_complete(
+ cli: Command,
+ ctx_args: cabc.MutableMapping[str, t.Any],
+ prog_name: str,
+ complete_var: str,
+ instruction: str,
+) -> int:
+ """Perform shell completion for the given CLI program.
+
+ :param cli: Command being called.
+ :param ctx_args: Extra arguments to pass to
+ ``cli.make_context``.
+ :param prog_name: Name of the executable in the shell.
+ :param complete_var: Name of the environment variable that holds
+ the completion instruction.
+ :param instruction: Value of ``complete_var`` with the completion
+ instruction and shell, in the form ``instruction_shell``.
+ :return: Status code to exit with.
+ """
+ shell, _, instruction = instruction.partition("_")
+ comp_cls = get_completion_class(shell)
+
+ if comp_cls is None:
+ return 1
+
+ comp = comp_cls(cli, ctx_args, prog_name, complete_var)
+
+ if instruction == "source":
+ echo(comp.source())
+ return 0
+
+ if instruction == "complete":
+ echo(comp.complete())
+ return 0
+
+ return 1
+
+
+class CompletionItem:
+ """Represents a completion value and metadata about the value. The
+ default metadata is ``type`` to indicate special shell handling,
+ and ``help`` if a shell supports showing a help string next to the
+ value.
+
+ Arbitrary parameters can be passed when creating the object, and
+ accessed using ``item.attr``. If an attribute wasn't passed,
+ accessing it returns ``None``.
+
+ :param value: The completion suggestion.
+ :param type: Tells the shell script to provide special completion
+ support for the type. Click uses ``"dir"`` and ``"file"``.
+ :param help: String shown next to the value if supported.
+ :param kwargs: Arbitrary metadata. The built-in implementations
+ don't use this, but custom type completions paired with custom
+ shell support could use it.
+ """
+
+ __slots__ = ("value", "type", "help", "_info")
+
+ def __init__(
+ self,
+ value: t.Any,
+ type: str = "plain",
+ help: str | None = None,
+ **kwargs: t.Any,
+ ) -> None:
+ self.value: t.Any = value
+ self.type: str = type
+ self.help: str | None = help
+ self._info = kwargs
+
+ def __getattr__(self, name: str) -> t.Any:
+ return self._info.get(name)
+
+
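+# Sketch of a completion callback returning these items; ``shell_complete``
+# on Option/Argument accepts such a function (names are illustrative):
+#
+#     def complete_env(ctx, param, incomplete):
+#         return [
+#             CompletionItem(name, help="deployment target")
+#             for name in ("dev", "staging", "prod")
+#             if name.startswith(incomplete)
+#         ]
+#
+#     @click.command()
+#     @click.argument("env", shell_complete=complete_env)
+#     def deploy(env):
+#         click.echo(env)
+
+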
+# Only Bash >= 4.4 has the nosort option.
+_SOURCE_BASH = """\
+%(complete_func)s() {
+ local IFS=$'\\n'
+ local response
+
+ response=$(env COMP_WORDS="${COMP_WORDS[*]}" COMP_CWORD=$COMP_CWORD \
+%(complete_var)s=bash_complete $1)
+
+ for completion in $response; do
+ IFS=',' read type value <<< "$completion"
+
+ if [[ $type == 'dir' ]]; then
+ COMPREPLY=()
+ compopt -o dirnames
+ elif [[ $type == 'file' ]]; then
+ COMPREPLY=()
+ compopt -o default
+ elif [[ $type == 'plain' ]]; then
+ COMPREPLY+=($value)
+ fi
+ done
+
+ return 0
+}
+
+%(complete_func)s_setup() {
+ complete -o nosort -F %(complete_func)s %(prog_name)s
+}
+
+%(complete_func)s_setup;
+"""
+
+# See ZshComplete.format_completion below, and issue #2703, before
+# changing this script.
+#
+# (TL;DR: _describe is picky about the format, but this Zsh script snippet
+# is already widely deployed. So freeze this script, and use clever-ish
+# handling of colons in ZshComplete.format_completion.)
+_SOURCE_ZSH = """\
+#compdef %(prog_name)s
+
+%(complete_func)s() {
+ local -a completions
+ local -a completions_with_descriptions
+ local -a response
+ (( ! $+commands[%(prog_name)s] )) && return 1
+
+ response=("${(@f)$(env COMP_WORDS="${words[*]}" COMP_CWORD=$((CURRENT-1)) \
+%(complete_var)s=zsh_complete %(prog_name)s)}")
+
+ for type key descr in ${response}; do
+ if [[ "$type" == "plain" ]]; then
+ if [[ "$descr" == "_" ]]; then
+ completions+=("$key")
+ else
+ completions_with_descriptions+=("$key":"$descr")
+ fi
+ elif [[ "$type" == "dir" ]]; then
+ _path_files -/
+ elif [[ "$type" == "file" ]]; then
+ _path_files -f
+ fi
+ done
+
+ if [ -n "$completions_with_descriptions" ]; then
+ _describe -V unsorted completions_with_descriptions -U
+ fi
+
+ if [ -n "$completions" ]; then
+ compadd -U -V unsorted -a completions
+ fi
+}
+
+if [[ $zsh_eval_context[-1] == loadautofunc ]]; then
+ # autoload from fpath, call function directly
+ %(complete_func)s "$@"
+else
+ # eval/source/. command, register function for later
+ compdef %(complete_func)s %(prog_name)s
+fi
+"""
+
+_SOURCE_FISH = """\
+function %(complete_func)s;
+ set -l response (env %(complete_var)s=fish_complete COMP_WORDS=(commandline -cp) \
+COMP_CWORD=(commandline -t) %(prog_name)s);
+
+ for completion in $response;
+ set -l metadata (string split "," $completion);
+
+ if test $metadata[1] = "dir";
+ __fish_complete_directories $metadata[2];
+ else if test $metadata[1] = "file";
+ __fish_complete_path $metadata[2];
+ else if test $metadata[1] = "plain";
+ echo $metadata[2];
+ end;
+ end;
+end;
+
+complete --no-files --command %(prog_name)s --arguments \
+"(%(complete_func)s)";
+"""
+
+
+class ShellComplete:
+ """Base class for providing shell completion support. A subclass for
+ a given shell will override attributes and methods to implement the
+ completion instructions (``source`` and ``complete``).
+
+ :param cli: Command being called.
+ :param prog_name: Name of the executable in the shell.
+ :param complete_var: Name of the environment variable that holds
+ the completion instruction.
+
+ .. versionadded:: 8.0
+ """
+
+ name: t.ClassVar[str]
+ """Name to register the shell as with :func:`add_completion_class`.
+ This is used in completion instructions (``{name}_source`` and
+ ``{name}_complete``).
+ """
+
+ source_template: t.ClassVar[str]
+ """Completion script template formatted by :meth:`source`. This must
+ be provided by subclasses.
+ """
+
+ def __init__(
+ self,
+ cli: Command,
+ ctx_args: cabc.MutableMapping[str, t.Any],
+ prog_name: str,
+ complete_var: str,
+ ) -> None:
+ self.cli = cli
+ self.ctx_args = ctx_args
+ self.prog_name = prog_name
+ self.complete_var = complete_var
+
+ @property
+ def func_name(self) -> str:
+ """The name of the shell function defined by the completion
+ script.
+ """
+ safe_name = re.sub(r"\W*", "", self.prog_name.replace("-", "_"), flags=re.ASCII)
+ return f"_{safe_name}_completion"
+
+ def source_vars(self) -> dict[str, t.Any]:
+ """Vars for formatting :attr:`source_template`.
+
+ By default this provides ``complete_func``, ``complete_var``,
+ and ``prog_name``.
+ """
+ return {
+ "complete_func": self.func_name,
+ "complete_var": self.complete_var,
+ "prog_name": self.prog_name,
+ }
+
+ def source(self) -> str:
+ """Produce the shell script that defines the completion
+ function. By default this ``%``-style formats
+ :attr:`source_template` with the dict returned by
+ :meth:`source_vars`.
+ """
+ return self.source_template % self.source_vars()
+
+ def get_completion_args(self) -> tuple[list[str], str]:
+ """Use the env vars defined by the shell script to return a
+ tuple of ``args, incomplete``. This must be implemented by
+ subclasses.
+ """
+ raise NotImplementedError
+
+ def get_completions(self, args: list[str], incomplete: str) -> list[CompletionItem]:
+ """Determine the context and last complete command or parameter
+ from the complete args. Call that object's ``shell_complete``
+ method to get the completions for the incomplete value.
+
+ :param args: List of complete args before the incomplete value.
+ :param incomplete: Value being completed. May be empty.
+ """
+ ctx = _resolve_context(self.cli, self.ctx_args, self.prog_name, args)
+ obj, incomplete = _resolve_incomplete(ctx, args, incomplete)
+ return obj.shell_complete(ctx, incomplete)
+
+ def format_completion(self, item: CompletionItem) -> str:
+ """Format a completion item into the form recognized by the
+ shell script. This must be implemented by subclasses.
+
+ :param item: Completion item to format.
+ """
+ raise NotImplementedError
+
+ def complete(self) -> str:
+ """Produce the completion data to send back to the shell.
+
+ By default this calls :meth:`get_completion_args`, gets the
+ completions, then calls :meth:`format_completion` for each
+ completion.
+ """
+ args, incomplete = self.get_completion_args()
+ completions = self.get_completions(args, incomplete)
+ out = [self.format_completion(item) for item in completions]
+ return "\n".join(out)
+
+
+class BashComplete(ShellComplete):
+ """Shell completion for Bash."""
+
+ name = "bash"
+ source_template = _SOURCE_BASH
+
+ @staticmethod
+ def _check_version() -> None:
+ import shutil
+ import subprocess
+
+ bash_exe = shutil.which("bash")
+
+ if bash_exe is None:
+ match = None
+ else:
+ output = subprocess.run(
+ [bash_exe, "--norc", "-c", 'echo "${BASH_VERSION}"'],
+ stdout=subprocess.PIPE,
+ )
+ match = re.search(r"^(\d+)\.(\d+)\.\d+", output.stdout.decode())
+
+ if match is not None:
+ major, minor = match.groups()
+
+ if major < "4" or major == "4" and minor < "4":
+ echo(
+ _(
+ "Shell completion is not supported for Bash"
+ " versions older than 4.4."
+ ),
+ err=True,
+ )
+ else:
+ echo(
+ _("Couldn't detect Bash version, shell completion is not supported."),
+ err=True,
+ )
+
+ def source(self) -> str:
+ self._check_version()
+ return super().source()
+
+ def get_completion_args(self) -> tuple[list[str], str]:
+ cwords = split_arg_string(os.environ["COMP_WORDS"])
+ cword = int(os.environ["COMP_CWORD"])
+ args = cwords[1:cword]
+
+ try:
+ incomplete = cwords[cword]
+ except IndexError:
+ incomplete = ""
+
+ return args, incomplete
+
+ def format_completion(self, item: CompletionItem) -> str:
+ return f"{item.type},{item.value}"
+
+
+class ZshComplete(ShellComplete):
+ """Shell completion for Zsh."""
+
+ name = "zsh"
+ source_template = _SOURCE_ZSH
+
+ def get_completion_args(self) -> tuple[list[str], str]:
+ cwords = split_arg_string(os.environ["COMP_WORDS"])
+ cword = int(os.environ["COMP_CWORD"])
+ args = cwords[1:cword]
+
+ try:
+ incomplete = cwords[cword]
+ except IndexError:
+ incomplete = ""
+
+ return args, incomplete
+
+ def format_completion(self, item: CompletionItem) -> str:
+ help_ = item.help or "_"
+ # The zsh completion script uses `_describe` on items with help
+ # texts (which splits the item help from the item value at the
+ # first unescaped colon) and `compadd` on items without help
+ # text (which uses the item value as-is and does not support
+ # colon escaping). So escape colons in the item value if and
+ # only if the item help is not the sentinel "_" value, as used
+ # by the completion script.
+ #
+ # (The zsh completion script is potentially widely deployed, and
+ # thus harder to fix than this method.)
+ #
+ # See issue #1812 and issue #2703 for further context.
+ value = item.value.replace(":", r"\:") if help_ != "_" else item.value
+ return f"{item.type}\n{value}\n{help_}"
+
+
+class FishComplete(ShellComplete):
+ """Shell completion for Fish."""
+
+ name = "fish"
+ source_template = _SOURCE_FISH
+
+ def get_completion_args(self) -> tuple[list[str], str]:
+ cwords = split_arg_string(os.environ["COMP_WORDS"])
+ incomplete = os.environ["COMP_CWORD"]
+ if incomplete:
+ incomplete = split_arg_string(incomplete)[0]
+ args = cwords[1:]
+
+ # Fish stores the partial word in both COMP_WORDS and
+ # COMP_CWORD, remove it from complete args.
+ if incomplete and args and args[-1] == incomplete:
+ args.pop()
+
+ return args, incomplete
+
+ def format_completion(self, item: CompletionItem) -> str:
+ if item.help:
+ return f"{item.type},{item.value}\t{item.help}"
+
+ return f"{item.type},{item.value}"
+
+
+ShellCompleteType = t.TypeVar("ShellCompleteType", bound="type[ShellComplete]")
+
+
+_available_shells: dict[str, type[ShellComplete]] = {
+ "bash": BashComplete,
+ "fish": FishComplete,
+ "zsh": ZshComplete,
+}
+
+
+def add_completion_class(
+ cls: ShellCompleteType, name: str | None = None
+) -> ShellCompleteType:
+ """Register a :class:`ShellComplete` subclass under the given name.
+ The name will be provided by the completion instruction environment
+ variable during completion.
+
+ :param cls: The completion class that will handle completion for the
+ shell.
+ :param name: Name to register the class under. Defaults to the
+ class's ``name`` attribute.
+ """
+ if name is None:
+ name = cls.name
+
+ _available_shells[name] = cls
+
+ return cls
+
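+# Editor's note: a minimal sketch (not part of upstream Click) showing how a
+# custom shell is wired up with ``ShellComplete`` and ``add_completion_class``.
+# The ``MyShellComplete`` class and its template text are hypothetical; the
+# attributes and methods it overrides are the real extension points defined
+# above.
+#
+#     class MyShellComplete(ShellComplete):
+#         name = "mysh"
+#         source_template = "# shell script using %(complete_var)s ..."
+#
+#         def get_completion_args(self):
+#             cwords = split_arg_string(os.environ["COMP_WORDS"])
+#             cword = int(os.environ["COMP_CWORD"])
+#             incomplete = cwords[cword] if cword < len(cwords) else ""
+#             return cwords[1:cword], incomplete
+#
+#         def format_completion(self, item):
+#             return f"{item.type},{item.value}"
+#
+#     add_completion_class(MyShellComplete)  # now reachable as "mysh"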
+
+def get_completion_class(shell: str) -> type[ShellComplete] | None:
+ """Look up a registered :class:`ShellComplete` subclass by the name
+ provided by the completion instruction environment variable. If the
+ name isn't registered, returns ``None``.
+
+ :param shell: Name the class is registered under.
+ """
+ return _available_shells.get(shell)
+
+
+def split_arg_string(string: str) -> list[str]:
+ """Split an argument string as with :func:`shlex.split`, but don't
+ fail if the string is incomplete. Ignores a missing closing quote or
+ incomplete escape sequence and uses the partial token as-is.
+
+ .. code-block:: python
+
+ split_arg_string("example 'my file")
+ ["example", "my file"]
+
+ split_arg_string("example my\\")
+ ["example", "my"]
+
+ :param string: String to split.
+
+ .. versionchanged:: 8.2
+ Moved to ``shell_completion`` from ``parser``.
+ """
+ import shlex
+
+ lex = shlex.shlex(string, posix=True)
+ lex.whitespace_split = True
+ lex.commenters = ""
+ out = []
+
+ try:
+ for token in lex:
+ out.append(token)
+ except ValueError:
+ # Raised when end-of-string is reached in an invalid state. Use
+ # the partial token as-is. The quote or escape character is in
+ # lex.state, not lex.token.
+ out.append(lex.token)
+
+ return out
+
+
+def _is_incomplete_argument(ctx: Context, param: Parameter) -> bool:
+ """Determine if the given parameter is an argument that can still
+ accept values.
+
+ :param ctx: Invocation context for the command represented by the
+ parsed complete args.
+ :param param: Argument object being checked.
+ """
+ if not isinstance(param, Argument):
+ return False
+
+ assert param.name is not None
+ # Will be None if expose_value is False.
+ value = ctx.params.get(param.name)
+ return (
+ param.nargs == -1
+ or ctx.get_parameter_source(param.name) is not ParameterSource.COMMANDLINE
+ or (
+ param.nargs > 1
+ and isinstance(value, (tuple, list))
+ and len(value) < param.nargs
+ )
+ )
+
+
+def _start_of_option(ctx: Context, value: str) -> bool:
+ """Check if the value looks like the start of an option."""
+ if not value:
+ return False
+
+ c = value[0]
+ return c in ctx._opt_prefixes
+
+
+def _is_incomplete_option(ctx: Context, args: list[str], param: Parameter) -> bool:
+ """Determine if the given parameter is an option that needs a value.
+
+ :param args: List of complete args before the incomplete value.
+ :param param: Option object being checked.
+ """
+ if not isinstance(param, Option):
+ return False
+
+ if param.is_flag or param.count:
+ return False
+
+ last_option = None
+
+ for index, arg in enumerate(reversed(args)):
+ if index + 1 > param.nargs:
+ break
+
+ if _start_of_option(ctx, arg):
+ last_option = arg
+ break
+
+ return last_option is not None and last_option in param.opts
+
+
+def _resolve_context(
+ cli: Command,
+ ctx_args: cabc.MutableMapping[str, t.Any],
+ prog_name: str,
+ args: list[str],
+) -> Context:
+ """Produce the context hierarchy starting with the command and
+ traversing the complete arguments. This only follows the commands,
+ it doesn't trigger input prompts or callbacks.
+
+ :param cli: Command being called.
+ :param prog_name: Name of the executable in the shell.
+ :param args: List of complete args before the incomplete value.
+ """
+ ctx_args["resilient_parsing"] = True
+ with cli.make_context(prog_name, args.copy(), **ctx_args) as ctx:
+ args = ctx._protected_args + ctx.args
+
+ while args:
+ command = ctx.command
+
+ if isinstance(command, Group):
+ if not command.chain:
+ name, cmd, args = command.resolve_command(ctx, args)
+
+ if cmd is None:
+ return ctx
+
+ with cmd.make_context(
+ name, args, parent=ctx, resilient_parsing=True
+ ) as sub_ctx:
+ ctx = sub_ctx
+ args = ctx._protected_args + ctx.args
+ else:
+ sub_ctx = ctx
+
+ while args:
+ name, cmd, args = command.resolve_command(ctx, args)
+
+ if cmd is None:
+ return ctx
+
+ with cmd.make_context(
+ name,
+ args,
+ parent=ctx,
+ allow_extra_args=True,
+ allow_interspersed_args=False,
+ resilient_parsing=True,
+ ) as sub_sub_ctx:
+ sub_ctx = sub_sub_ctx
+ args = sub_ctx.args
+
+ ctx = sub_ctx
+ args = [*sub_ctx._protected_args, *sub_ctx.args]
+ else:
+ break
+
+ return ctx
+
+
+def _resolve_incomplete(
+ ctx: Context, args: list[str], incomplete: str
+) -> tuple[Command | Parameter, str]:
+ """Find the Click object that will handle the completion of the
+ incomplete value. Return the object and the incomplete value.
+
+ :param ctx: Invocation context for the command represented by
+ the parsed complete args.
+ :param args: List of complete args before the incomplete value.
+ :param incomplete: Value being completed. May be empty.
+ """
+ # Different shells treat an "=" between a long option name and
+ # value differently. Might keep the value joined, return the "="
+ # as a separate item, or return the split name and value. Always
+ # split and discard the "=" to make completion easier.
+ if incomplete == "=":
+ incomplete = ""
+ elif "=" in incomplete and _start_of_option(ctx, incomplete):
+ name, _, incomplete = incomplete.partition("=")
+ args.append(name)
+
+ # The "--" marker tells Click to stop treating values as options
+ # even if they start with the option character. If it hasn't been
+ # given and the incomplete arg looks like an option, the current
+ # command will provide option name completions.
+ if "--" not in args and _start_of_option(ctx, incomplete):
+ return ctx.command, incomplete
+
+ params = ctx.command.get_params(ctx)
+
+ # If the last complete arg is an option name with an incomplete
+ # value, the option will provide value completions.
+ for param in params:
+ if _is_incomplete_option(ctx, args, param):
+ return param, incomplete
+
+ # It's not an option name or value. The first argument without a
+ # parsed value will provide value completions.
+ for param in params:
+ if _is_incomplete_argument(ctx, param):
+ return param, incomplete
+
+ # There were no unparsed arguments, the command may be a group that
+ # will provide command name completions.
+ return ctx.command, incomplete
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/click/termui.py b/stripe_config/.venv/lib/python3.12/site-packages/click/termui.py
new file mode 100644
index 0000000..dcbb222
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/click/termui.py
@@ -0,0 +1,877 @@
+from __future__ import annotations
+
+import collections.abc as cabc
+import inspect
+import io
+import itertools
+import sys
+import typing as t
+from contextlib import AbstractContextManager
+from gettext import gettext as _
+
+from ._compat import isatty
+from ._compat import strip_ansi
+from .exceptions import Abort
+from .exceptions import UsageError
+from .globals import resolve_color_default
+from .types import Choice
+from .types import convert_type
+from .types import ParamType
+from .utils import echo
+from .utils import LazyFile
+
+if t.TYPE_CHECKING:
+ from ._termui_impl import ProgressBar
+
+V = t.TypeVar("V")
+
+# The prompt functions to use. The doc tools currently override these
+# functions to customize how they work.
+visible_prompt_func: t.Callable[[str], str] = input
+
+_ansi_colors = {
+ "black": 30,
+ "red": 31,
+ "green": 32,
+ "yellow": 33,
+ "blue": 34,
+ "magenta": 35,
+ "cyan": 36,
+ "white": 37,
+ "reset": 39,
+ "bright_black": 90,
+ "bright_red": 91,
+ "bright_green": 92,
+ "bright_yellow": 93,
+ "bright_blue": 94,
+ "bright_magenta": 95,
+ "bright_cyan": 96,
+ "bright_white": 97,
+}
+_ansi_reset_all = "\033[0m"
+
+
+def hidden_prompt_func(prompt: str) -> str:
+ import getpass
+
+ return getpass.getpass(prompt)
+
+
+def _build_prompt(
+ text: str,
+ suffix: str,
+ show_default: bool = False,
+ default: t.Any | None = None,
+ show_choices: bool = True,
+ type: ParamType | None = None,
+) -> str:
+ prompt = text
+ if type is not None and show_choices and isinstance(type, Choice):
+ prompt += f" ({', '.join(map(str, type.choices))})"
+ if default is not None and show_default:
+ prompt = f"{prompt} [{_format_default(default)}]"
+ return f"{prompt}{suffix}"
+
+
+def _format_default(default: t.Any) -> t.Any:
+ if isinstance(default, (io.IOBase, LazyFile)) and hasattr(default, "name"):
+ return default.name
+
+ return default
+
+
+def prompt(
+ text: str,
+ default: t.Any | None = None,
+ hide_input: bool = False,
+ confirmation_prompt: bool | str = False,
+ type: ParamType | t.Any | None = None,
+ value_proc: t.Callable[[str], t.Any] | None = None,
+ prompt_suffix: str = ": ",
+ show_default: bool = True,
+ err: bool = False,
+ show_choices: bool = True,
+) -> t.Any:
+ """Prompts a user for input. This is a convenience function that can
+ be used to prompt a user for input later.
+
+ If the user aborts the input by sending an interrupt signal, this
+ function will catch it and raise a :exc:`Abort` exception.
+
+ :param text: the text to show for the prompt.
+ :param default: the default value to use if no input happens. If this
+ is not given it will prompt until it's aborted.
+ :param hide_input: if this is set to true then the input value will
+ be hidden.
+ :param confirmation_prompt: Prompt a second time to confirm the
+ value. Can be set to a string instead of ``True`` to customize
+ the message.
+ :param type: the type to use to check the value against.
+ :param value_proc: if this parameter is provided it's a function that
+ is invoked instead of the type conversion to
+ convert a value.
+ :param prompt_suffix: a suffix that should be added to the prompt.
+ :param show_default: shows or hides the default value in the prompt.
+ :param err: if set to true the file defaults to ``stderr`` instead of
+ ``stdout``, the same as with echo.
+ :param show_choices: Show or hide choices if the passed type is a Choice.
+ For example if type is a Choice of either day or week,
+ show_choices is true and text is "Group by" then the
+ prompt will be "Group by (day, week): ".
+
+ .. versionadded:: 8.0
+ ``confirmation_prompt`` can be a custom string.
+
+ .. versionadded:: 7.0
+ Added the ``show_choices`` parameter.
+
+ .. versionadded:: 6.0
+ Added unicode support for cmd.exe on Windows.
+
+ .. versionadded:: 4.0
+ Added the `err` parameter.
+
+ """
+
+ def prompt_func(text: str) -> str:
+ f = hidden_prompt_func if hide_input else visible_prompt_func
+ try:
+ # Write the prompt separately so that we get nice
+ # coloring through colorama on Windows
+ echo(text.rstrip(" "), nl=False, err=err)
+ # Echo a space to stdout to work around an issue where
+ # readline causes backspace to clear the whole line.
+ return f(" ")
+ except (KeyboardInterrupt, EOFError):
+ # getpass doesn't print a newline if the user aborts input with ^C.
+ # Allegedly this behavior is inherited from getpass(3).
+ # A doc bug has been filed at https://bugs.python.org/issue24711
+ if hide_input:
+ echo(None, err=err)
+ raise Abort() from None
+
+ if value_proc is None:
+ value_proc = convert_type(type, default)
+
+ prompt = _build_prompt(
+ text, prompt_suffix, show_default, default, show_choices, type
+ )
+
+ if confirmation_prompt:
+ if confirmation_prompt is True:
+ confirmation_prompt = _("Repeat for confirmation")
+
+ confirmation_prompt = _build_prompt(confirmation_prompt, prompt_suffix)
+
+ while True:
+ while True:
+ value = prompt_func(prompt)
+ if value:
+ break
+ elif default is not None:
+ value = default
+ break
+ try:
+ result = value_proc(value)
+ except UsageError as e:
+ if hide_input:
+ echo(_("Error: The value you entered was invalid."), err=err)
+ else:
+ echo(_("Error: {e.message}").format(e=e), err=err)
+ continue
+ if not confirmation_prompt:
+ return result
+ while True:
+ value2 = prompt_func(confirmation_prompt)
+ is_empty = not value and not value2
+ if value2 or is_empty:
+ break
+ if value == value2:
+ return result
+ echo(_("Error: The two entered values do not match."), err=err)
+
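+# Editor's note: a small usage sketch for ``prompt`` (illustrative only; the
+# prompt texts and values are made up).
+#
+#     port = prompt("Port", default=8080, type=int)
+#     # renders "Port [8080]: "; pressing enter returns the default 8080 as int
+#
+#     password = prompt("Password", hide_input=True, confirmation_prompt=True)
+#     # asks a second time and repeats until both entries match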
+
+def confirm(
+ text: str,
+ default: bool | None = False,
+ abort: bool = False,
+ prompt_suffix: str = ": ",
+ show_default: bool = True,
+ err: bool = False,
+) -> bool:
+ """Prompts for confirmation (yes/no question).
+
+ If the user aborts the input by sending an interrupt signal, this
+ function will catch it and raise a :exc:`Abort` exception.
+
+ :param text: the question to ask.
+ :param default: The default value to use when no input is given. If
+ ``None``, repeat until input is given.
+ :param abort: if this is set to `True` a negative answer aborts the
+ program by raising :exc:`Abort`.
+ :param prompt_suffix: a suffix that should be added to the prompt.
+ :param show_default: shows or hides the default value in the prompt.
+ :param err: if set to true the file defaults to ``stderr`` instead of
+ ``stdout``, the same as with echo.
+
+ .. versionchanged:: 8.0
+ Repeat until input is given if ``default`` is ``None``.
+
+ .. versionadded:: 4.0
+ Added the ``err`` parameter.
+ """
+ prompt = _build_prompt(
+ text,
+ prompt_suffix,
+ show_default,
+ "y/n" if default is None else ("Y/n" if default else "y/N"),
+ )
+
+ while True:
+ try:
+ # Write the prompt separately so that we get nice
+ # coloring through colorama on Windows
+ echo(prompt.rstrip(" "), nl=False, err=err)
+ # Echo a space to stdout to work around an issue where
+ # readline causes backspace to clear the whole line.
+ value = visible_prompt_func(" ").lower().strip()
+ except (KeyboardInterrupt, EOFError):
+ raise Abort() from None
+ if value in ("y", "yes"):
+ rv = True
+ elif value in ("n", "no"):
+ rv = False
+ elif default is not None and value == "":
+ rv = default
+ else:
+ echo(_("Error: invalid input"), err=err)
+ continue
+ break
+ if abort and not rv:
+ raise Abort()
+ return rv
+
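+# Editor's note: usage sketch for ``confirm`` (illustrative only).
+#
+#     if confirm("Continue?", default=True):
+#         ...  # renders "Continue? [Y/n]: "
+#
+#     confirm("Delete all files?", abort=True)
+#     # a negative answer raises Abort and stops the program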
+
+def echo_via_pager(
+ text_or_generator: cabc.Iterable[str] | t.Callable[[], cabc.Iterable[str]] | str,
+ color: bool | None = None,
+) -> None:
+ """This function takes a text and shows it via an environment specific
+ pager on stdout.
+
+ .. versionchanged:: 3.0
+ Added the `color` flag.
+
+ :param text_or_generator: the text to page, or alternatively, a
+ generator emitting the text to page.
+ :param color: controls if the pager supports ANSI colors or not. The
+ default is autodetection.
+ """
+ color = resolve_color_default(color)
+
+ if inspect.isgeneratorfunction(text_or_generator):
+ i = t.cast("t.Callable[[], cabc.Iterable[str]]", text_or_generator)()
+ elif isinstance(text_or_generator, str):
+ i = [text_or_generator]
+ else:
+ i = iter(t.cast("cabc.Iterable[str]", text_or_generator))
+
+ # convert every element of i to a text type if necessary
+ text_generator = (el if isinstance(el, str) else str(el) for el in i)
+
+ from ._termui_impl import pager
+
+ return pager(itertools.chain(text_generator, "\n"), color)
+
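+# Editor's note: ``echo_via_pager`` accepts a string, an iterable of strings,
+# or a generator function, as sketched below (illustrative only).
+#
+#     def _report_lines():
+#         for i in range(500):
+#             yield f"line {i}\n"
+#
+#     echo_via_pager(_report_lines)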
+
+@t.overload
+def progressbar(
+ *,
+ length: int,
+ label: str | None = None,
+ hidden: bool = False,
+ show_eta: bool = True,
+ show_percent: bool | None = None,
+ show_pos: bool = False,
+ fill_char: str = "#",
+ empty_char: str = "-",
+ bar_template: str = "%(label)s [%(bar)s] %(info)s",
+ info_sep: str = " ",
+ width: int = 36,
+ file: t.TextIO | None = None,
+ color: bool | None = None,
+ update_min_steps: int = 1,
+) -> ProgressBar[int]: ...
+
+
+@t.overload
+def progressbar(
+ iterable: cabc.Iterable[V] | None = None,
+ length: int | None = None,
+ label: str | None = None,
+ hidden: bool = False,
+ show_eta: bool = True,
+ show_percent: bool | None = None,
+ show_pos: bool = False,
+ item_show_func: t.Callable[[V | None], str | None] | None = None,
+ fill_char: str = "#",
+ empty_char: str = "-",
+ bar_template: str = "%(label)s [%(bar)s] %(info)s",
+ info_sep: str = " ",
+ width: int = 36,
+ file: t.TextIO | None = None,
+ color: bool | None = None,
+ update_min_steps: int = 1,
+) -> ProgressBar[V]: ...
+
+
+def progressbar(
+ iterable: cabc.Iterable[V] | None = None,
+ length: int | None = None,
+ label: str | None = None,
+ hidden: bool = False,
+ show_eta: bool = True,
+ show_percent: bool | None = None,
+ show_pos: bool = False,
+ item_show_func: t.Callable[[V | None], str | None] | None = None,
+ fill_char: str = "#",
+ empty_char: str = "-",
+ bar_template: str = "%(label)s [%(bar)s] %(info)s",
+ info_sep: str = " ",
+ width: int = 36,
+ file: t.TextIO | None = None,
+ color: bool | None = None,
+ update_min_steps: int = 1,
+) -> ProgressBar[V]:
+ """This function creates an iterable context manager that can be used
+ to iterate over something while showing a progress bar. It will
+ either iterate over the `iterable` or `length` items (that are counted
+ up). While iteration happens, this function will print a rendered
+ progress bar to the given `file` (defaults to stdout) and will attempt
+ to calculate remaining time and more. By default, this progress bar
+ will not be rendered if the file is not a terminal.
+
+ The context manager creates the progress bar. When the context
+ manager is entered the progress bar is already created. With every
+ iteration over the progress bar, the iterable passed to the bar is
+ advanced and the bar is updated. When the context manager exits,
+ a newline is printed and the progress bar is finalized on screen.
+
+ Note: The progress bar is currently designed for use cases where the
+ total progress can be expected to take at least several seconds.
+ Because of this, the ProgressBar class object won't display
+ progress that is considered too fast, and progress where the time
+ between steps is less than a second.
+
+ No other output must be printed while the bar is active, or the
+ progress bar will be unintentionally destroyed.
+
+ Example usage::
+
+ with progressbar(items) as bar:
+ for item in bar:
+ do_something_with(item)
+
+ Alternatively, if no iterable is specified, one can manually update the
+ progress bar through the `update()` method instead of directly
+ iterating over the progress bar. The update method accepts the number
+ of steps to increment the bar with::
+
+ with progressbar(length=chunks.total_bytes) as bar:
+ for chunk in chunks:
+ process_chunk(chunk)
+ bar.update(chunks.bytes)
+
+ The ``update()`` method also takes an optional value specifying the
+ ``current_item`` at the new position. This is useful when used
+ together with ``item_show_func`` to customize the output for each
+ manual step::
+
+ with click.progressbar(
+ length=total_size,
+ label='Unzipping archive',
+ item_show_func=lambda a: a.filename
+ ) as bar:
+ for archive in zip_file:
+ archive.extract()
+ bar.update(archive.size, archive)
+
+ :param iterable: an iterable to iterate over. If not provided the length
+ is required.
+ :param length: the number of items to iterate over. By default the
+ progressbar will attempt to ask the iterator about its
+ length, which might or might not work. If an iterable is
+ also provided this parameter can be used to override the
+ length. If an iterable is not provided the progress bar
+ will iterate over a range of that length.
+ :param label: the label to show next to the progress bar.
+ :param hidden: hide the progressbar. Defaults to ``False``. When no tty is
+ detected, it will only print the progressbar label. Setting this to
+ ``True`` also disables that.
+ :param show_eta: enables or disables the estimated time display. This is
+ automatically disabled if the length cannot be
+ determined.
+ :param show_percent: enables or disables the percentage display. The
+ default is `True` if the iterable has a length or
+ `False` if not.
+ :param show_pos: enables or disables the absolute position display. The
+ default is `False`.
+ :param item_show_func: A function called with the current item which
+ can return a string to show next to the progress bar. If the
+ function returns ``None`` nothing is shown. The current item can
+ be ``None``, such as when entering and exiting the bar.
+ :param fill_char: the character to use to show the filled part of the
+ progress bar.
+ :param empty_char: the character to use to show the non-filled part of
+ the progress bar.
+ :param bar_template: the format string to use as template for the bar.
+ The parameters in it are ``label`` for the label,
+ ``bar`` for the progress bar and ``info`` for the
+ info section.
+ :param info_sep: the separator between multiple info items (eta etc.)
+ :param width: the width of the progress bar in characters, 0 means full
+ terminal width
+ :param file: The file to write to. If this is not a terminal then
+ only the label is printed.
+ :param color: controls if the terminal supports ANSI colors or not. The
+ default is autodetection. This is only needed if ANSI
+ codes are included anywhere in the progress bar output
+ which is not the case by default.
+ :param update_min_steps: Render only when this many updates have
+ completed. This allows tuning for very fast iterators.
+
+ .. versionadded:: 8.2
+ The ``hidden`` argument.
+
+ .. versionchanged:: 8.0
+ Output is shown even if execution time is less than 0.5 seconds.
+
+ .. versionchanged:: 8.0
+ ``item_show_func`` shows the current item, not the previous one.
+
+ .. versionchanged:: 8.0
+ Labels are echoed if the output is not a TTY. Reverts a change
+ in 7.0 that removed all output.
+
+ .. versionadded:: 8.0
+ The ``update_min_steps`` parameter.
+
+ .. versionadded:: 4.0
+ The ``color`` parameter and ``update`` method.
+
+ .. versionadded:: 2.0
+ """
+ from ._termui_impl import ProgressBar
+
+ color = resolve_color_default(color)
+ return ProgressBar(
+ iterable=iterable,
+ length=length,
+ hidden=hidden,
+ show_eta=show_eta,
+ show_percent=show_percent,
+ show_pos=show_pos,
+ item_show_func=item_show_func,
+ fill_char=fill_char,
+ empty_char=empty_char,
+ bar_template=bar_template,
+ info_sep=info_sep,
+ file=file,
+ label=label,
+ width=width,
+ color=color,
+ update_min_steps=update_min_steps,
+ )
+
+
+def clear() -> None:
+ """Clears the terminal screen. This will have the effect of clearing
+ the whole visible space of the terminal and moving the cursor to the
+ top left. This does not do anything if not connected to a terminal.
+
+ .. versionadded:: 2.0
+ """
+ if not isatty(sys.stdout):
+ return
+
+ # ANSI escape \033[2J clears the screen, \033[1;1H moves the cursor
+ echo("\033[2J\033[1;1H", nl=False)
+
+
+def _interpret_color(color: int | tuple[int, int, int] | str, offset: int = 0) -> str:
+ if isinstance(color, int):
+ return f"{38 + offset};5;{color:d}"
+
+ if isinstance(color, (tuple, list)):
+ r, g, b = color
+ return f"{38 + offset};2;{r:d};{g:d};{b:d}"
+
+ return str(_ansi_colors[color] + offset)
+
+
+def style(
+ text: t.Any,
+ fg: int | tuple[int, int, int] | str | None = None,
+ bg: int | tuple[int, int, int] | str | None = None,
+ bold: bool | None = None,
+ dim: bool | None = None,
+ underline: bool | None = None,
+ overline: bool | None = None,
+ italic: bool | None = None,
+ blink: bool | None = None,
+ reverse: bool | None = None,
+ strikethrough: bool | None = None,
+ reset: bool = True,
+) -> str:
+ """Styles a text with ANSI styles and returns the new string. By
+ default the styling is self contained which means that at the end
+ of the string a reset code is issued. This can be prevented by
+ passing ``reset=False``.
+
+ Examples::
+
+ click.echo(click.style('Hello World!', fg='green'))
+ click.echo(click.style('ATTENTION!', blink=True))
+ click.echo(click.style('Some things', reverse=True, fg='cyan'))
+ click.echo(click.style('More colors', fg=(255, 12, 128), bg=117))
+
+ Supported color names:
+
+ * ``black`` (might be a gray)
+ * ``red``
+ * ``green``
+ * ``yellow`` (might be an orange)
+ * ``blue``
+ * ``magenta``
+ * ``cyan``
+ * ``white`` (might be light gray)
+ * ``bright_black``
+ * ``bright_red``
+ * ``bright_green``
+ * ``bright_yellow``
+ * ``bright_blue``
+ * ``bright_magenta``
+ * ``bright_cyan``
+ * ``bright_white``
+ * ``reset`` (reset the color code only)
+
+ If the terminal supports it, color may also be specified as:
+
+ - An integer in the interval [0, 255]. The terminal must support
+ 8-bit/256-color mode.
+ - An RGB tuple of three integers in [0, 255]. The terminal must
+ support 24-bit/true-color mode.
+
+ See https://en.wikipedia.org/wiki/ANSI_color and
+ https://gist.github.com/XVilka/8346728 for more information.
+
+ :param text: the string to style with ansi codes.
+ :param fg: if provided this will become the foreground color.
+ :param bg: if provided this will become the background color.
+ :param bold: if provided this will enable or disable bold mode.
+ :param dim: if provided this will enable or disable dim mode. This is
+ badly supported.
+ :param underline: if provided this will enable or disable underline.
+ :param overline: if provided this will enable or disable overline.
+ :param italic: if provided this will enable or disable italic.
+ :param blink: if provided this will enable or disable blinking.
+ :param reverse: if provided this will enable or disable inverse
+ rendering (foreground becomes background and the
+ other way round).
+ :param strikethrough: if provided this will enable or disable
+ striking through text.
+ :param reset: by default a reset-all code is added at the end of the
+ string which means that styles do not carry over. This
+ can be disabled to compose styles.
+
+ .. versionchanged:: 8.0
+ A non-string ``message`` is converted to a string.
+
+ .. versionchanged:: 8.0
+ Added support for 256 and RGB color codes.
+
+ .. versionchanged:: 8.0
+ Added the ``strikethrough``, ``italic``, and ``overline``
+ parameters.
+
+ .. versionchanged:: 7.0
+ Added support for bright colors.
+
+ .. versionadded:: 2.0
+ """
+ if not isinstance(text, str):
+ text = str(text)
+
+ bits = []
+
+ if fg:
+ try:
+ bits.append(f"\033[{_interpret_color(fg)}m")
+ except KeyError:
+ raise TypeError(f"Unknown color {fg!r}") from None
+
+ if bg:
+ try:
+ bits.append(f"\033[{_interpret_color(bg, 10)}m")
+ except KeyError:
+ raise TypeError(f"Unknown color {bg!r}") from None
+
+ if bold is not None:
+ bits.append(f"\033[{1 if bold else 22}m")
+ if dim is not None:
+ bits.append(f"\033[{2 if dim else 22}m")
+ if underline is not None:
+ bits.append(f"\033[{4 if underline else 24}m")
+ if overline is not None:
+ bits.append(f"\033[{53 if overline else 55}m")
+ if italic is not None:
+ bits.append(f"\033[{3 if italic else 23}m")
+ if blink is not None:
+ bits.append(f"\033[{5 if blink else 25}m")
+ if reverse is not None:
+ bits.append(f"\033[{7 if reverse else 27}m")
+ if strikethrough is not None:
+ bits.append(f"\033[{9 if strikethrough else 29}m")
+ bits.append(text)
+ if reset:
+ bits.append(_ansi_reset_all)
+ return "".join(bits)
+
+
+def unstyle(text: str) -> str:
+ """Removes ANSI styling information from a string. Usually it's not
+ necessary to use this function as Click's echo function will
+ automatically remove styling if necessary.
+
+ .. versionadded:: 2.0
+
+ :param text: the text to remove style information from.
+ """
+ return strip_ansi(text)
+
+
+def secho(
+ message: t.Any | None = None,
+ file: t.IO[t.AnyStr] | None = None,
+ nl: bool = True,
+ err: bool = False,
+ color: bool | None = None,
+ **styles: t.Any,
+) -> None:
+ """This function combines :func:`echo` and :func:`style` into one
+ call. As such the following two calls are the same::
+
+ click.secho('Hello World!', fg='green')
+ click.echo(click.style('Hello World!', fg='green'))
+
+ All keyword arguments are forwarded to the underlying functions
+ depending on which one they go with.
+
+ Non-string types will be converted to :class:`str`. However,
+ :class:`bytes` are passed directly to :meth:`echo` without applying
+ style. If you want to style bytes that represent text, call
+ :meth:`bytes.decode` first.
+
+ .. versionchanged:: 8.0
+ A non-string ``message`` is converted to a string. Bytes are
+ passed through without style applied.
+
+ .. versionadded:: 2.0
+ """
+ if message is not None and not isinstance(message, (bytes, bytearray)):
+ message = style(message, **styles)
+
+ return echo(message, file=file, nl=nl, err=err, color=color)
+
+
+@t.overload
+def edit(
+ text: bytes | bytearray,
+ editor: str | None = None,
+ env: cabc.Mapping[str, str] | None = None,
+ require_save: bool = False,
+ extension: str = ".txt",
+) -> bytes | None: ...
+
+
+@t.overload
+def edit(
+ text: str,
+ editor: str | None = None,
+ env: cabc.Mapping[str, str] | None = None,
+ require_save: bool = True,
+ extension: str = ".txt",
+) -> str | None: ...
+
+
+@t.overload
+def edit(
+ text: None = None,
+ editor: str | None = None,
+ env: cabc.Mapping[str, str] | None = None,
+ require_save: bool = True,
+ extension: str = ".txt",
+ filename: str | cabc.Iterable[str] | None = None,
+) -> None: ...
+
+
+def edit(
+ text: str | bytes | bytearray | None = None,
+ editor: str | None = None,
+ env: cabc.Mapping[str, str] | None = None,
+ require_save: bool = True,
+ extension: str = ".txt",
+ filename: str | cabc.Iterable[str] | None = None,
+) -> str | bytes | bytearray | None:
+ r"""Edits the given text in the defined editor. If an editor is given
+ (should be the full path to the executable but the regular operating
+ system search path is used for finding the executable) it overrides
+ the detected editor. Optionally, some environment variables can be
+ used. If the editor is closed without changes, `None` is returned. In
+ case a file is edited directly the return value is always `None` and
+ `require_save` and `extension` are ignored.
+
+ If the editor cannot be opened a :exc:`UsageError` is raised.
+
+ Note for Windows: to simplify cross-platform usage, the newlines are
+ automatically converted from POSIX to Windows and vice versa. As such,
+ the message here will have ``\n`` as newline markers.
+
+ :param text: the text to edit.
+ :param editor: optionally the editor to use. Defaults to automatic
+ detection.
+ :param env: environment variables to forward to the editor.
+ :param require_save: if this is true, then not saving in the editor
+ will make the return value become `None`.
+ :param extension: the extension to tell the editor about. This defaults
+ to `.txt` but changing this might change syntax
+ highlighting.
+ :param filename: if provided it will edit this file instead of the
+ provided text contents. It will not use a temporary
+ file as an indirection in that case. If the editor supports
+ editing multiple files at once, a sequence of files may be
+ passed as well. Invoke `click.edit` once per file instead
+ if multiple files cannot be managed at once or editing the
+ files serially is desired.
+
+ .. versionchanged:: 8.2.0
+ ``filename`` now accepts any ``Iterable[str]`` in addition to a ``str``
+ if the ``editor`` supports editing multiple files at once.
+
+ """
+ from ._termui_impl import Editor
+
+ ed = Editor(editor=editor, env=env, require_save=require_save, extension=extension)
+
+ if filename is None:
+ return ed.edit(text)
+
+ if isinstance(filename, str):
+ filename = (filename,)
+
+ ed.edit_files(filenames=filename)
+ return None
+
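+# Editor's note: usage sketch for ``edit`` (file names are illustrative).
+#
+#     message = edit("# Describe your change\n")
+#     # returns the edited text, or None if the editor was closed without saving
+#
+#     edit(filename="notes.txt")            # edit a file in place, returns None
+#     edit(filename=["a.txt", "b.txt"])     # 8.2+: several files at once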
+
+def launch(url: str, wait: bool = False, locate: bool = False) -> int:
+ """This function launches the given URL (or filename) in the default
+ viewer application for this file type. If this is an executable, it
+ might launch the executable in a new session. The return value is
+ the exit code of the launched application. Usually, ``0`` indicates
+ success.
+
+ Examples::
+
+ click.launch('https://click.palletsprojects.com/')
+ click.launch('/my/downloaded/file', locate=True)
+
+ .. versionadded:: 2.0
+
+ :param url: URL or filename of the thing to launch.
+ :param wait: Wait for the program to exit before returning. This
+ only works if the launched program blocks. In particular,
+ ``xdg-open`` on Linux does not block.
+ :param locate: if this is set to `True` then instead of launching the
+ application associated with the URL it will attempt to
+ launch a file manager with the file located. This
+ might have weird effects if the URL does not point to
+ the filesystem.
+ """
+ from ._termui_impl import open_url
+
+ return open_url(url, wait=wait, locate=locate)
+
+
+# If this is provided, getchar() calls into this instead. This is used
+# for unittesting purposes.
+_getchar: t.Callable[[bool], str] | None = None
+
+
+def getchar(echo: bool = False) -> str:
+ """Fetches a single character from the terminal and returns it. This
+ will always return a unicode character and under certain rare
+ circumstances this might return more than one character. The
+ situations which more than one character is returned is when for
+ whatever reason multiple characters end up in the terminal buffer or
+ standard input was not actually a terminal.
+
+ Note that this will always read from the terminal, even if something
+ is piped into the standard input.
+
+ Note for Windows: in rare cases when typing non-ASCII characters, this
+ function might wait for a second character and then return both at once.
+ This is because certain Unicode characters look like special-key markers.
+
+ .. versionadded:: 2.0
+
+ :param echo: if set to `True`, the character read will also show up on
+ the terminal. The default is to not show it.
+ """
+ global _getchar
+
+ if _getchar is None:
+ from ._termui_impl import getchar as f
+
+ _getchar = f
+
+ return _getchar(echo)
+
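+# Editor's note: sketch of a single-keypress prompt built on ``getchar``
+# (illustrative only).
+#
+#     echo("Overwrite? [y/N] ", nl=False)
+#     ch = getchar()
+#     echo()  # finish the line
+#     if ch.lower() == "y":
+#         ...  # proceed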
+
+def raw_terminal() -> AbstractContextManager[int]:
+ from ._termui_impl import raw_terminal as f
+
+ return f()
+
+
+def pause(info: str | None = None, err: bool = False) -> None:
+ """This command stops execution and waits for the user to press any
+ key to continue. This is similar to the Windows batch "pause"
+ command. If the program is not run through a terminal, this command
+ will instead do nothing.
+
+ .. versionadded:: 2.0
+
+ .. versionadded:: 4.0
+ Added the `err` parameter.
+
+ :param info: The message to print before pausing. Defaults to
+ ``"Press any key to continue..."``.
+ :param err: if set to true the message goes to ``stderr`` instead of
+ ``stdout``, the same as with echo.
+ """
+ if not isatty(sys.stdin) or not isatty(sys.stdout):
+ return
+
+ if info is None:
+ info = _("Press any key to continue...")
+
+ try:
+ if info:
+ echo(info, nl=False, err=err)
+ try:
+ getchar()
+ except (KeyboardInterrupt, EOFError):
+ pass
+ finally:
+ if info:
+ echo(err=err)
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/click/testing.py b/stripe_config/.venv/lib/python3.12/site-packages/click/testing.py
new file mode 100644
index 0000000..f6f60b8
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/click/testing.py
@@ -0,0 +1,577 @@
+from __future__ import annotations
+
+import collections.abc as cabc
+import contextlib
+import io
+import os
+import shlex
+import sys
+import tempfile
+import typing as t
+from types import TracebackType
+
+from . import _compat
+from . import formatting
+from . import termui
+from . import utils
+from ._compat import _find_binary_reader
+
+if t.TYPE_CHECKING:
+ from _typeshed import ReadableBuffer
+
+ from .core import Command
+
+
+class EchoingStdin:
+ def __init__(self, input: t.BinaryIO, output: t.BinaryIO) -> None:
+ self._input = input
+ self._output = output
+ self._paused = False
+
+ def __getattr__(self, x: str) -> t.Any:
+ return getattr(self._input, x)
+
+ def _echo(self, rv: bytes) -> bytes:
+ if not self._paused:
+ self._output.write(rv)
+
+ return rv
+
+ def read(self, n: int = -1) -> bytes:
+ return self._echo(self._input.read(n))
+
+ def read1(self, n: int = -1) -> bytes:
+ return self._echo(self._input.read1(n)) # type: ignore
+
+ def readline(self, n: int = -1) -> bytes:
+ return self._echo(self._input.readline(n))
+
+ def readlines(self) -> list[bytes]:
+ return [self._echo(x) for x in self._input.readlines()]
+
+ def __iter__(self) -> cabc.Iterator[bytes]:
+ return iter(self._echo(x) for x in self._input)
+
+ def __repr__(self) -> str:
+ return repr(self._input)
+
+
+@contextlib.contextmanager
+def _pause_echo(stream: EchoingStdin | None) -> cabc.Iterator[None]:
+ if stream is None:
+ yield
+ else:
+ stream._paused = True
+ yield
+ stream._paused = False
+
+
+class BytesIOCopy(io.BytesIO):
+ """Patch ``io.BytesIO`` to let the written stream be copied to another.
+
+ .. versionadded:: 8.2
+ """
+
+ def __init__(self, copy_to: io.BytesIO) -> None:
+ super().__init__()
+ self.copy_to = copy_to
+
+ def flush(self) -> None:
+ super().flush()
+ self.copy_to.flush()
+
+ def write(self, b: ReadableBuffer) -> int:
+ self.copy_to.write(b)
+ return super().write(b)
+
+
+class StreamMixer:
+ """Mixes `` and `` streams.
+
+ The result is available in the ``output`` attribute.
+
+ .. versionadded:: 8.2
+ """
+
+ def __init__(self) -> None:
+ self.output: io.BytesIO = io.BytesIO()
+ self.stdout: io.BytesIO = BytesIOCopy(copy_to=self.output)
+ self.stderr: io.BytesIO = BytesIOCopy(copy_to=self.output)
+
+ def __del__(self) -> None:
+ """
+ Guarantee that embedded file-like objects are closed in a
+ predictable order, protecting against races between
+ self.output being closed and other streams being flushed on close
+
+ .. versionadded:: 8.2.2
+ """
+ self.stderr.close()
+ self.stdout.close()
+ self.output.close()
+
+
+class _NamedTextIOWrapper(io.TextIOWrapper):
+ def __init__(
+ self, buffer: t.BinaryIO, name: str, mode: str, **kwargs: t.Any
+ ) -> None:
+ super().__init__(buffer, **kwargs)
+ self._name = name
+ self._mode = mode
+
+ @property
+ def name(self) -> str:
+ return self._name
+
+ @property
+ def mode(self) -> str:
+ return self._mode
+
+
+def make_input_stream(
+ input: str | bytes | t.IO[t.Any] | None, charset: str
+) -> t.BinaryIO:
+ # Is already an input stream.
+ if hasattr(input, "read"):
+ rv = _find_binary_reader(t.cast("t.IO[t.Any]", input))
+
+ if rv is not None:
+ return rv
+
+ raise TypeError("Could not find binary reader for input stream.")
+
+ if input is None:
+ input = b""
+ elif isinstance(input, str):
+ input = input.encode(charset)
+
+ return io.BytesIO(input)
+
+
+class Result:
+ """Holds the captured result of an invoked CLI script.
+
+ :param runner: The runner that created the result
+ :param stdout_bytes: The standard output as bytes.
+ :param stderr_bytes: The standard error as bytes.
+ :param output_bytes: A mix of ``stdout_bytes`` and ``stderr_bytes``, as the
+ user would see it in their terminal.
+ :param return_value: The value returned from the invoked command.
+ :param exit_code: The exit code as integer.
+ :param exception: The exception that happened if one did.
+ :param exc_info: Exception information (exception type, exception instance,
+ traceback type).
+
+ .. versionchanged:: 8.2
+ ``stderr_bytes`` no longer optional, ``output_bytes`` introduced and
+ ``mix_stderr`` has been removed.
+
+ .. versionadded:: 8.0
+ Added ``return_value``.
+ """
+
+ def __init__(
+ self,
+ runner: CliRunner,
+ stdout_bytes: bytes,
+ stderr_bytes: bytes,
+ output_bytes: bytes,
+ return_value: t.Any,
+ exit_code: int,
+ exception: BaseException | None,
+ exc_info: tuple[type[BaseException], BaseException, TracebackType]
+ | None = None,
+ ):
+ self.runner = runner
+ self.stdout_bytes = stdout_bytes
+ self.stderr_bytes = stderr_bytes
+ self.output_bytes = output_bytes
+ self.return_value = return_value
+ self.exit_code = exit_code
+ self.exception = exception
+ self.exc_info = exc_info
+
+ @property
+ def output(self) -> str:
+ """The terminal output as unicode string, as the user would see it.
+
+ .. versionchanged:: 8.2
+ No longer a proxy for ``self.stdout``. Now has its own independent stream
+ that is mixing `<stdout>` and `<stderr>`, in the order they were written.
+ """
+ return self.output_bytes.decode(self.runner.charset, "replace").replace(
+ "\r\n", "\n"
+ )
+
+ @property
+ def stdout(self) -> str:
+ """The standard output as unicode string."""
+ return self.stdout_bytes.decode(self.runner.charset, "replace").replace(
+ "\r\n", "\n"
+ )
+
+ @property
+ def stderr(self) -> str:
+ """The standard error as unicode string.
+
+ .. versionchanged:: 8.2
+ No longer raise an exception, always returns the `<stderr>` string.
+ """
+ return self.stderr_bytes.decode(self.runner.charset, "replace").replace(
+ "\r\n", "\n"
+ )
+
+ def __repr__(self) -> str:
+ exc_str = repr(self.exception) if self.exception else "okay"
+ return f"<{type(self).__name__} {exc_str}>"
+
+
+class CliRunner:
+ """The CLI runner provides functionality to invoke a Click command line
+ script for unittesting purposes in an isolated environment. This only
+ works in single-threaded systems without any concurrency as it changes the
+ global interpreter state.
+
+ :param charset: the character set for the input and output data.
+ :param env: a dictionary with environment variables for overriding.
+ :param echo_stdin: if this is set to `True`, then reading from `<stdin>` writes
+ to `<stdout>`. This is useful for showing examples in
+ some circumstances. Note that regular prompts
+ will automatically echo the input.
+ :param catch_exceptions: Whether to catch any exceptions other than
+ ``SystemExit`` when running :meth:`~CliRunner.invoke`.
+
+ .. versionchanged:: 8.2
+ Added the ``catch_exceptions`` parameter.
+
+ .. versionchanged:: 8.2
+ ``mix_stderr`` parameter has been removed.
+ """
+
+ def __init__(
+ self,
+ charset: str = "utf-8",
+ env: cabc.Mapping[str, str | None] | None = None,
+ echo_stdin: bool = False,
+ catch_exceptions: bool = True,
+ ) -> None:
+ self.charset = charset
+ self.env: cabc.Mapping[str, str | None] = env or {}
+ self.echo_stdin = echo_stdin
+ self.catch_exceptions = catch_exceptions
+
+ def get_default_prog_name(self, cli: Command) -> str:
+ """Given a command object it will return the default program name
+ for it. The default is the `name` attribute or ``"root"`` if not
+ set.
+ """
+ return cli.name or "root"
+
+ def make_env(
+ self, overrides: cabc.Mapping[str, str | None] | None = None
+ ) -> cabc.Mapping[str, str | None]:
+ """Returns the environment overrides for invoking a script."""
+ rv = dict(self.env)
+ if overrides:
+ rv.update(overrides)
+ return rv
+
+ @contextlib.contextmanager
+ def isolation(
+ self,
+ input: str | bytes | t.IO[t.Any] | None = None,
+ env: cabc.Mapping[str, str | None] | None = None,
+ color: bool = False,
+ ) -> cabc.Iterator[tuple[io.BytesIO, io.BytesIO, io.BytesIO]]:
+ """A context manager that sets up the isolation for invoking of a
+ command line tool. This sets up `` with the given input data
+ and `os.environ` with the overrides from the given dictionary.
+ This also rebinds some internals in Click to be mocked (like the
+ prompt functionality).
+
+ This is automatically done in the :meth:`invoke` method.
+
+ :param input: the input stream to put into `sys.stdin`.
+ :param env: the environment overrides as dictionary.
+ :param color: whether the output should contain color codes. The
+ application can still override this explicitly.
+
+ .. versionadded:: 8.2
+ An additional output stream is returned, which is a mix of
+ `<stdout>` and `<stderr>` streams.
+
+ .. versionchanged:: 8.2
+ Always returns the `<stderr>` stream.
+
+ .. versionchanged:: 8.0
+ `<stderr>` is opened with ``errors="backslashreplace"``
+ instead of the default ``"strict"``.
+
+ .. versionchanged:: 4.0
+ Added the ``color`` parameter.
+ """
+ bytes_input = make_input_stream(input, self.charset)
+ echo_input = None
+
+ old_stdin = sys.stdin
+ old_stdout = sys.stdout
+ old_stderr = sys.stderr
+ old_forced_width = formatting.FORCED_WIDTH
+ formatting.FORCED_WIDTH = 80
+
+ env = self.make_env(env)
+
+ stream_mixer = StreamMixer()
+
+ if self.echo_stdin:
+ bytes_input = echo_input = t.cast(
+ t.BinaryIO, EchoingStdin(bytes_input, stream_mixer.stdout)
+ )
+
+ sys.stdin = text_input = _NamedTextIOWrapper(
+ bytes_input, encoding=self.charset, name="<stdin>", mode="r"
+ )
+
+ if self.echo_stdin:
+ # Force unbuffered reads, otherwise TextIOWrapper reads a
+ # large chunk which is echoed early.
+ text_input._CHUNK_SIZE = 1 # type: ignore
+
+ sys.stdout = _NamedTextIOWrapper(
+ stream_mixer.stdout, encoding=self.charset, name="<stdout>", mode="w"
+ )
+
+ sys.stderr = _NamedTextIOWrapper(
+ stream_mixer.stderr,
+ encoding=self.charset,
+ name="",
+ mode="w",
+ errors="backslashreplace",
+ )
+
+ @_pause_echo(echo_input) # type: ignore
+ def visible_input(prompt: str | None = None) -> str:
+ sys.stdout.write(prompt or "")
+ try:
+ val = next(text_input).rstrip("\r\n")
+ except StopIteration as e:
+ raise EOFError() from e
+ sys.stdout.write(f"{val}\n")
+ sys.stdout.flush()
+ return val
+
+ @_pause_echo(echo_input) # type: ignore
+ def hidden_input(prompt: str | None = None) -> str:
+ sys.stdout.write(f"{prompt or ''}\n")
+ sys.stdout.flush()
+ try:
+ return next(text_input).rstrip("\r\n")
+ except StopIteration as e:
+ raise EOFError() from e
+
+ @_pause_echo(echo_input) # type: ignore
+ def _getchar(echo: bool) -> str:
+ char = sys.stdin.read(1)
+
+ if echo:
+ sys.stdout.write(char)
+
+ sys.stdout.flush()
+ return char
+
+ default_color = color
+
+ def should_strip_ansi(
+ stream: t.IO[t.Any] | None = None, color: bool | None = None
+ ) -> bool:
+ if color is None:
+ return not default_color
+ return not color
+
+ old_visible_prompt_func = termui.visible_prompt_func
+ old_hidden_prompt_func = termui.hidden_prompt_func
+ old__getchar_func = termui._getchar
+ old_should_strip_ansi = utils.should_strip_ansi # type: ignore
+ old__compat_should_strip_ansi = _compat.should_strip_ansi
+ termui.visible_prompt_func = visible_input
+ termui.hidden_prompt_func = hidden_input
+ termui._getchar = _getchar
+ utils.should_strip_ansi = should_strip_ansi # type: ignore
+ _compat.should_strip_ansi = should_strip_ansi
+
+ old_env = {}
+ try:
+ for key, value in env.items():
+ old_env[key] = os.environ.get(key)
+ if value is None:
+ try:
+ del os.environ[key]
+ except Exception:
+ pass
+ else:
+ os.environ[key] = value
+ yield (stream_mixer.stdout, stream_mixer.stderr, stream_mixer.output)
+ finally:
+ for key, value in old_env.items():
+ if value is None:
+ try:
+ del os.environ[key]
+ except Exception:
+ pass
+ else:
+ os.environ[key] = value
+ sys.stdout = old_stdout
+ sys.stderr = old_stderr
+ sys.stdin = old_stdin
+ termui.visible_prompt_func = old_visible_prompt_func
+ termui.hidden_prompt_func = old_hidden_prompt_func
+ termui._getchar = old__getchar_func
+ utils.should_strip_ansi = old_should_strip_ansi # type: ignore
+ _compat.should_strip_ansi = old__compat_should_strip_ansi
+ formatting.FORCED_WIDTH = old_forced_width
+
+ def invoke(
+ self,
+ cli: Command,
+ args: str | cabc.Sequence[str] | None = None,
+ input: str | bytes | t.IO[t.Any] | None = None,
+ env: cabc.Mapping[str, str | None] | None = None,
+ catch_exceptions: bool | None = None,
+ color: bool = False,
+ **extra: t.Any,
+ ) -> Result:
+ """Invokes a command in an isolated environment. The arguments are
+ forwarded directly to the command line script, the `extra` keyword
+ arguments are passed to the :meth:`~clickpkg.Command.main` function of
+ the command.
+
+ This returns a :class:`Result` object.
+
+ :param cli: the command to invoke
+ :param args: the arguments to invoke. It may be given as an iterable
+ or a string. When given as string it will be interpreted
+ as a Unix shell command. More details at
+ :func:`shlex.split`.
+ :param input: the input data for `sys.stdin`.
+ :param env: the environment overrides.
+ :param catch_exceptions: Whether to catch any other exceptions than
+ ``SystemExit``. If :data:`None`, the value
+ from :class:`CliRunner` is used.
+ :param extra: the keyword arguments to pass to :meth:`main`.
+ :param color: whether the output should contain color codes. The
+ application can still override this explicitly.
+
+ .. versionadded:: 8.2
+ The result object has the ``output_bytes`` attribute with
+ the mix of ``stdout_bytes`` and ``stderr_bytes``, as the user would
+ see it in their terminal.
+
+ .. versionchanged:: 8.2
+ The result object always returns the ``stderr_bytes`` stream.
+
+ .. versionchanged:: 8.0
+ The result object has the ``return_value`` attribute with
+ the value returned from the invoked command.
+
+ .. versionchanged:: 4.0
+ Added the ``color`` parameter.
+
+ .. versionchanged:: 3.0
+ Added the ``catch_exceptions`` parameter.
+
+ .. versionchanged:: 3.0
+ The result object has the ``exc_info`` attribute with the
+ traceback if available.
+ """
+ exc_info = None
+ if catch_exceptions is None:
+ catch_exceptions = self.catch_exceptions
+
+ with self.isolation(input=input, env=env, color=color) as outstreams:
+ return_value = None
+ exception: BaseException | None = None
+ exit_code = 0
+
+ if isinstance(args, str):
+ args = shlex.split(args)
+
+ try:
+ prog_name = extra.pop("prog_name")
+ except KeyError:
+ prog_name = self.get_default_prog_name(cli)
+
+ try:
+ return_value = cli.main(args=args or (), prog_name=prog_name, **extra)
+ except SystemExit as e:
+ exc_info = sys.exc_info()
+ e_code = t.cast("int | t.Any | None", e.code)
+
+ if e_code is None:
+ e_code = 0
+
+ if e_code != 0:
+ exception = e
+
+ if not isinstance(e_code, int):
+ sys.stdout.write(str(e_code))
+ sys.stdout.write("\n")
+ e_code = 1
+
+ exit_code = e_code
+
+ except Exception as e:
+ if not catch_exceptions:
+ raise
+ exception = e
+ exit_code = 1
+ exc_info = sys.exc_info()
+ finally:
+ sys.stdout.flush()
+ sys.stderr.flush()
+ stdout = outstreams[0].getvalue()
+ stderr = outstreams[1].getvalue()
+ output = outstreams[2].getvalue()
+
+ return Result(
+ runner=self,
+ stdout_bytes=stdout,
+ stderr_bytes=stderr,
+ output_bytes=output,
+ return_value=return_value,
+ exit_code=exit_code,
+ exception=exception,
+ exc_info=exc_info, # type: ignore
+ )
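+ # Editor's note: typical test usage of ``invoke`` above (a sketch; the
+ # ``hello`` command is hypothetical).
+ #
+ #     import click
+ #
+ #     @click.command()
+ #     @click.argument("name")
+ #     def hello(name):
+ #         click.echo(f"Hello {name}!")
+ #
+ #     runner = CliRunner()
+ #     result = runner.invoke(hello, ["World"])
+ #     assert result.exit_code == 0
+ #     assert result.output == "Hello World!\n"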
+
+ @contextlib.contextmanager
+ def isolated_filesystem(
+ self, temp_dir: str | os.PathLike[str] | None = None
+ ) -> cabc.Iterator[str]:
+ """A context manager that creates a temporary directory and
+ changes the current working directory to it. This isolates tests
+ that affect the contents of the CWD to prevent them from
+ interfering with each other.
+
+ :param temp_dir: Create the temporary directory under this
+ directory. If given, the created directory is not removed
+ when exiting.
+
+ .. versionchanged:: 8.0
+ Added the ``temp_dir`` parameter.
+ """
+ cwd = os.getcwd()
+ dt = tempfile.mkdtemp(dir=temp_dir)
+ os.chdir(dt)
+
+ try:
+ yield dt
+ finally:
+ os.chdir(cwd)
+
+ if temp_dir is None:
+ import shutil
+
+ try:
+ shutil.rmtree(dt)
+ except OSError:
+ pass
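+ # Editor's note: sketch of ``isolated_filesystem`` above (illustrative only).
+ #
+ #     runner = CliRunner()
+ #     with runner.isolated_filesystem() as tmp_path:
+ #         with open("data.txt", "w") as f:
+ #             f.write("scratch")
+ #         # the CWD is tmp_path here; it is removed on exit unless
+ #         # temp_dir was passed explicitly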
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/click/types.py b/stripe_config/.venv/lib/python3.12/site-packages/click/types.py
new file mode 100644
index 0000000..e71c1c2
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/click/types.py
@@ -0,0 +1,1209 @@
+from __future__ import annotations
+
+import collections.abc as cabc
+import enum
+import os
+import stat
+import sys
+import typing as t
+from datetime import datetime
+from gettext import gettext as _
+from gettext import ngettext
+
+from ._compat import _get_argv_encoding
+from ._compat import open_stream
+from .exceptions import BadParameter
+from .utils import format_filename
+from .utils import LazyFile
+from .utils import safecall
+
+if t.TYPE_CHECKING:
+ import typing_extensions as te
+
+ from .core import Context
+ from .core import Parameter
+ from .shell_completion import CompletionItem
+
+ParamTypeValue = t.TypeVar("ParamTypeValue")
+
+
+class ParamType:
+ """Represents the type of a parameter. Validates and converts values
+ from the command line or Python into the correct type.
+
+ To implement a custom type, subclass and implement at least the
+ following:
+
+ - The :attr:`name` class attribute must be set.
+ - Calling an instance of the type with ``None`` must return
+ ``None``. This is already implemented by default.
+ - :meth:`convert` must convert string values to the correct type.
+ - :meth:`convert` must accept values that are already the correct
+ type.
+ - It must be able to convert a value if the ``ctx`` and ``param``
+ arguments are ``None``. This can occur when converting prompt
+ input.
+ """
+
+ is_composite: t.ClassVar[bool] = False
+ arity: t.ClassVar[int] = 1
+
+ #: the descriptive name of this type
+ name: str
+
+ #: if a list of this type is expected and the value is pulled from a
+ #: string environment variable, this is what splits it up. `None`
+ #: means any whitespace. For all parameters the general rule is that
+    #: whitespace splits them up. The exceptions are paths and files, which
+ #: are split by ``os.path.pathsep`` by default (":" on Unix and ";" on
+ #: Windows).
+ envvar_list_splitter: t.ClassVar[str | None] = None
+
+ def to_info_dict(self) -> dict[str, t.Any]:
+ """Gather information that could be useful for a tool generating
+ user-facing documentation.
+
+ Use :meth:`click.Context.to_info_dict` to traverse the entire
+ CLI structure.
+
+ .. versionadded:: 8.0
+ """
+ # The class name without the "ParamType" suffix.
+ param_type = type(self).__name__.partition("ParamType")[0]
+ param_type = param_type.partition("ParameterType")[0]
+
+ # Custom subclasses might not remember to set a name.
+ if hasattr(self, "name"):
+ name = self.name
+ else:
+ name = param_type
+
+ return {"param_type": param_type, "name": name}
+
+ def __call__(
+ self,
+ value: t.Any,
+ param: Parameter | None = None,
+ ctx: Context | None = None,
+ ) -> t.Any:
+ if value is not None:
+ return self.convert(value, param, ctx)
+
+ def get_metavar(self, param: Parameter, ctx: Context) -> str | None:
+ """Returns the metavar default for this param if it provides one."""
+
+ def get_missing_message(self, param: Parameter, ctx: Context | None) -> str | None:
+ """Optionally might return extra information about a missing
+ parameter.
+
+ .. versionadded:: 2.0
+ """
+
+ def convert(
+ self, value: t.Any, param: Parameter | None, ctx: Context | None
+ ) -> t.Any:
+ """Convert the value to the correct type. This is not called if
+ the value is ``None`` (the missing value).
+
+ This must accept string values from the command line, as well as
+ values that are already the correct type. It may also convert
+ other compatible types.
+
+ The ``param`` and ``ctx`` arguments may be ``None`` in certain
+ situations, such as when converting prompt input.
+
+ If the value cannot be converted, call :meth:`fail` with a
+ descriptive message.
+
+ :param value: The value to convert.
+ :param param: The parameter that is using this type to convert
+ its value. May be ``None``.
+ :param ctx: The current context that arrived at this value. May
+ be ``None``.
+ """
+ return value
+
+ def split_envvar_value(self, rv: str) -> cabc.Sequence[str]:
+ """Given a value from an environment variable this splits it up
+ into small chunks depending on the defined envvar list splitter.
+
+ If the splitter is set to `None`, which means that whitespace splits,
+ then leading and trailing whitespace is ignored. Otherwise, leading
+ and trailing splitters usually lead to empty items being included.
+ """
+ return (rv or "").split(self.envvar_list_splitter)
+
+ def fail(
+ self,
+ message: str,
+ param: Parameter | None = None,
+ ctx: Context | None = None,
+ ) -> t.NoReturn:
+ """Helper method to fail with an invalid value message."""
+ raise BadParameter(message, ctx=ctx, param=param)
+
+ def shell_complete(
+ self, ctx: Context, param: Parameter, incomplete: str
+ ) -> list[CompletionItem]:
+ """Return a list of
+ :class:`~click.shell_completion.CompletionItem` objects for the
+ incomplete value. Most types do not provide completions, but
+ some do, and this allows custom types to provide custom
+ completions as well.
+
+ :param ctx: Invocation context for this command.
+ :param param: The parameter that is requesting completion.
+ :param incomplete: Value being completed. May be empty.
+
+ .. versionadded:: 8.0
+ """
+ return []
+
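
As a rough illustration of the checklist in the `ParamType` docstring above, here is a hypothetical custom type (the `CommaSeparated` name and the `--tags` option are invented for this sketch): it sets `name`, accepts values that are already converted, and calls `fail()` when conversion is impossible.

```python
import click

class CommaSeparated(click.ParamType):
    name = "comma-separated"  # required descriptive name

    def convert(self, value, param, ctx):
        # Accept values that are already the correct type.
        if isinstance(value, (list, tuple)):
            return list(value)
        try:
            return [item.strip() for item in value.split(",") if item.strip()]
        except AttributeError:
            self.fail(f"{value!r} is not a comma-separated string", param, ctx)

@click.command()
@click.option("--tags", type=CommaSeparated(), default="web,api")
def deploy(tags):
    click.echo(tags)
```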
+
+class CompositeParamType(ParamType):
+ is_composite = True
+
+ @property
+ def arity(self) -> int: # type: ignore
+ raise NotImplementedError()
+
+
+class FuncParamType(ParamType):
+ def __init__(self, func: t.Callable[[t.Any], t.Any]) -> None:
+ self.name: str = func.__name__
+ self.func = func
+
+ def to_info_dict(self) -> dict[str, t.Any]:
+ info_dict = super().to_info_dict()
+ info_dict["func"] = self.func
+ return info_dict
+
+ def convert(
+ self, value: t.Any, param: Parameter | None, ctx: Context | None
+ ) -> t.Any:
+ try:
+ return self.func(value)
+ except ValueError:
+ try:
+ value = str(value)
+ except UnicodeError:
+ value = value.decode("utf-8", "replace")
+
+ self.fail(value, param, ctx)
+
+
+class UnprocessedParamType(ParamType):
+ name = "text"
+
+ def convert(
+ self, value: t.Any, param: Parameter | None, ctx: Context | None
+ ) -> t.Any:
+ return value
+
+ def __repr__(self) -> str:
+ return "UNPROCESSED"
+
+
+class StringParamType(ParamType):
+ name = "text"
+
+ def convert(
+ self, value: t.Any, param: Parameter | None, ctx: Context | None
+ ) -> t.Any:
+ if isinstance(value, bytes):
+ enc = _get_argv_encoding()
+ try:
+ value = value.decode(enc)
+ except UnicodeError:
+ fs_enc = sys.getfilesystemencoding()
+ if fs_enc != enc:
+ try:
+ value = value.decode(fs_enc)
+ except UnicodeError:
+ value = value.decode("utf-8", "replace")
+ else:
+ value = value.decode("utf-8", "replace")
+ return value
+ return str(value)
+
+ def __repr__(self) -> str:
+ return "STRING"
+
+
+class Choice(ParamType, t.Generic[ParamTypeValue]):
+ """The choice type allows a value to be checked against a fixed set
+ of supported values.
+
+ You may pass any iterable value which will be converted to a tuple
+ and thus will only be iterated once.
+
+ The resulting value will always be one of the originally passed choices.
+ See :meth:`normalize_choice` for more info on the mapping of strings
+ to choices. See :ref:`choice-opts` for an example.
+
+ :param case_sensitive: Set to false to make choices case
+ insensitive. Defaults to true.
+
+ .. versionchanged:: 8.2.0
+        Non-``str`` ``choices`` are now supported. Any iterable may be passed;
+        previously, only a list or tuple was recommended.
+
+ .. versionadded:: 8.2.0
+ Choice normalization can be overridden via :meth:`normalize_choice`.
+ """
+
+ name = "choice"
+
+ def __init__(
+ self, choices: cabc.Iterable[ParamTypeValue], case_sensitive: bool = True
+ ) -> None:
+ self.choices: cabc.Sequence[ParamTypeValue] = tuple(choices)
+ self.case_sensitive = case_sensitive
+
+ def to_info_dict(self) -> dict[str, t.Any]:
+ info_dict = super().to_info_dict()
+ info_dict["choices"] = self.choices
+ info_dict["case_sensitive"] = self.case_sensitive
+ return info_dict
+
+ def _normalized_mapping(
+ self, ctx: Context | None = None
+ ) -> cabc.Mapping[ParamTypeValue, str]:
+ """
+ Returns mapping where keys are the original choices and the values are
+ the normalized values that are accepted via the command line.
+
+        This is a simple wrapper around :meth:`normalize_choice`; prefer that
+        method, as it is the supported API.
+ """
+ return {
+ choice: self.normalize_choice(
+ choice=choice,
+ ctx=ctx,
+ )
+ for choice in self.choices
+ }
+
+ def normalize_choice(self, choice: ParamTypeValue, ctx: Context | None) -> str:
+ """
+ Normalize a choice value, used to map a passed string to a choice.
+ Each choice must have a unique normalized value.
+
+        By default this uses :meth:`Context.token_normalize_func` and, if the
+        type is not case sensitive, converts the result to a casefolded value.
+
+ .. versionadded:: 8.2.0
+ """
+ normed_value = choice.name if isinstance(choice, enum.Enum) else str(choice)
+
+ if ctx is not None and ctx.token_normalize_func is not None:
+ normed_value = ctx.token_normalize_func(normed_value)
+
+ if not self.case_sensitive:
+ normed_value = normed_value.casefold()
+
+ return normed_value
+
+ def get_metavar(self, param: Parameter, ctx: Context) -> str | None:
+ if param.param_type_name == "option" and not param.show_choices: # type: ignore
+ choice_metavars = [
+ convert_type(type(choice)).name.upper() for choice in self.choices
+ ]
+ choices_str = "|".join([*dict.fromkeys(choice_metavars)])
+ else:
+ choices_str = "|".join(
+ [str(i) for i in self._normalized_mapping(ctx=ctx).values()]
+ )
+
+ # Use curly braces to indicate a required argument.
+ if param.required and param.param_type_name == "argument":
+ return f"{{{choices_str}}}"
+
+ # Use square braces to indicate an option or optional argument.
+ return f"[{choices_str}]"
+
+ def get_missing_message(self, param: Parameter, ctx: Context | None) -> str:
+ """
+ Message shown when no choice is passed.
+
+ .. versionchanged:: 8.2.0 Added ``ctx`` argument.
+ """
+ return _("Choose from:\n\t{choices}").format(
+ choices=",\n\t".join(self._normalized_mapping(ctx=ctx).values())
+ )
+
+ def convert(
+ self, value: t.Any, param: Parameter | None, ctx: Context | None
+ ) -> ParamTypeValue:
+ """
+ For a given value from the parser, normalize it and find its
+ matching normalized value in the list of choices. Then return the
+ matched "original" choice.
+ """
+ normed_value = self.normalize_choice(choice=value, ctx=ctx)
+ normalized_mapping = self._normalized_mapping(ctx=ctx)
+
+ try:
+ return next(
+ original
+ for original, normalized in normalized_mapping.items()
+ if normalized == normed_value
+ )
+ except StopIteration:
+ self.fail(
+ self.get_invalid_choice_message(value=value, ctx=ctx),
+ param=param,
+ ctx=ctx,
+ )
+
+ def get_invalid_choice_message(self, value: t.Any, ctx: Context | None) -> str:
+ """Get the error message when the given choice is invalid.
+
+ :param value: The invalid value.
+
+ .. versionadded:: 8.2
+ """
+ choices_str = ", ".join(map(repr, self._normalized_mapping(ctx=ctx).values()))
+ return ngettext(
+ "{value!r} is not {choice}.",
+ "{value!r} is not one of {choices}.",
+ len(self.choices),
+ ).format(value=value, choice=choices_str, choices=choices_str)
+
+ def __repr__(self) -> str:
+ return f"Choice({list(self.choices)})"
+
+ def shell_complete(
+ self, ctx: Context, param: Parameter, incomplete: str
+ ) -> list[CompletionItem]:
+ """Complete choices that start with the incomplete value.
+
+ :param ctx: Invocation context for this command.
+ :param param: The parameter that is requesting completion.
+ :param incomplete: Value being completed. May be empty.
+
+ .. versionadded:: 8.0
+ """
+ from click.shell_completion import CompletionItem
+
+ str_choices = map(str, self.choices)
+
+ if self.case_sensitive:
+ matched = (c for c in str_choices if c.startswith(incomplete))
+ else:
+ incomplete = incomplete.lower()
+ matched = (c for c in str_choices if c.lower().startswith(incomplete))
+
+ return [CompletionItem(c) for c in matched]
+
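
A small sketch of the normalization behaviour described above, using plain string choices (option and values invented): with `case_sensitive=False`, `YES` on the command line casefolds to the original choice `yes`, and the converted value is always one of the originally passed choices.

```python
import click

@click.command()
@click.option("--confirm", type=click.Choice(["yes", "no"], case_sensitive=False))
def release(confirm):
    # "--confirm YES" is normalized and mapped back to the original "yes".
    click.echo(confirm)
```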
+
+class DateTime(ParamType):
+ """The DateTime type converts date strings into `datetime` objects.
+
+ The format strings which are checked are configurable, but default to some
+ common (non-timezone aware) ISO 8601 formats.
+
+ When specifying *DateTime* formats, you should only pass a list or a tuple.
+ Other iterables, like generators, may lead to surprising results.
+
+ The format strings are processed using ``datetime.strptime``, and this
+ consequently defines the format strings which are allowed.
+
+ Parsing is tried using each format, in order, and the first format which
+ parses successfully is used.
+
+ :param formats: A list or tuple of date format strings, in the order in
+ which they should be tried. Defaults to
+ ``'%Y-%m-%d'``, ``'%Y-%m-%dT%H:%M:%S'``,
+ ``'%Y-%m-%d %H:%M:%S'``.
+ """
+
+ name = "datetime"
+
+ def __init__(self, formats: cabc.Sequence[str] | None = None):
+ self.formats: cabc.Sequence[str] = formats or [
+ "%Y-%m-%d",
+ "%Y-%m-%dT%H:%M:%S",
+ "%Y-%m-%d %H:%M:%S",
+ ]
+
+ def to_info_dict(self) -> dict[str, t.Any]:
+ info_dict = super().to_info_dict()
+ info_dict["formats"] = self.formats
+ return info_dict
+
+ def get_metavar(self, param: Parameter, ctx: Context) -> str | None:
+ return f"[{'|'.join(self.formats)}]"
+
+ def _try_to_convert_date(self, value: t.Any, format: str) -> datetime | None:
+ try:
+ return datetime.strptime(value, format)
+ except ValueError:
+ return None
+
+ def convert(
+ self, value: t.Any, param: Parameter | None, ctx: Context | None
+ ) -> t.Any:
+ if isinstance(value, datetime):
+ return value
+
+ for format in self.formats:
+ converted = self._try_to_convert_date(value, format)
+
+ if converted is not None:
+ return converted
+
+ formats_str = ", ".join(map(repr, self.formats))
+ self.fail(
+ ngettext(
+ "{value!r} does not match the format {format}.",
+ "{value!r} does not match the formats {formats}.",
+ len(self.formats),
+ ).format(value=value, format=formats_str, formats=formats_str),
+ param,
+ ctx,
+ )
+
+ def __repr__(self) -> str:
+ return "DateTime"
+
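
A hedged example of `DateTime` with explicit formats (option name and formats are illustrative): each format is tried in order with `datetime.strptime`, and the first one that parses wins.

```python
import click

@click.command()
@click.option(
    "--when",
    # Tried in order; "2024-01-31" matches the first, "31.01.2024" the second.
    type=click.DateTime(formats=["%Y-%m-%d", "%d.%m.%Y"]),
)
def schedule(when):
    click.echo(when.isoformat())
```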
+
+class _NumberParamTypeBase(ParamType):
+ _number_class: t.ClassVar[type[t.Any]]
+
+ def convert(
+ self, value: t.Any, param: Parameter | None, ctx: Context | None
+ ) -> t.Any:
+ try:
+ return self._number_class(value)
+ except ValueError:
+ self.fail(
+ _("{value!r} is not a valid {number_type}.").format(
+ value=value, number_type=self.name
+ ),
+ param,
+ ctx,
+ )
+
+
+class _NumberRangeBase(_NumberParamTypeBase):
+ def __init__(
+ self,
+ min: float | None = None,
+ max: float | None = None,
+ min_open: bool = False,
+ max_open: bool = False,
+ clamp: bool = False,
+ ) -> None:
+ self.min = min
+ self.max = max
+ self.min_open = min_open
+ self.max_open = max_open
+ self.clamp = clamp
+
+ def to_info_dict(self) -> dict[str, t.Any]:
+ info_dict = super().to_info_dict()
+ info_dict.update(
+ min=self.min,
+ max=self.max,
+ min_open=self.min_open,
+ max_open=self.max_open,
+ clamp=self.clamp,
+ )
+ return info_dict
+
+ def convert(
+ self, value: t.Any, param: Parameter | None, ctx: Context | None
+ ) -> t.Any:
+ import operator
+
+ rv = super().convert(value, param, ctx)
+ lt_min: bool = self.min is not None and (
+ operator.le if self.min_open else operator.lt
+ )(rv, self.min)
+ gt_max: bool = self.max is not None and (
+ operator.ge if self.max_open else operator.gt
+ )(rv, self.max)
+
+ if self.clamp:
+ if lt_min:
+ return self._clamp(self.min, 1, self.min_open) # type: ignore
+
+ if gt_max:
+ return self._clamp(self.max, -1, self.max_open) # type: ignore
+
+ if lt_min or gt_max:
+ self.fail(
+ _("{value} is not in the range {range}.").format(
+ value=rv, range=self._describe_range()
+ ),
+ param,
+ ctx,
+ )
+
+ return rv
+
+ def _clamp(self, bound: float, dir: t.Literal[1, -1], open: bool) -> float:
+ """Find the valid value to clamp to bound in the given
+ direction.
+
+ :param bound: The boundary value.
+ :param dir: 1 or -1 indicating the direction to move.
+ :param open: If true, the range does not include the bound.
+ """
+ raise NotImplementedError
+
+ def _describe_range(self) -> str:
+ """Describe the range for use in help text."""
+ if self.min is None:
+ op = "<" if self.max_open else "<="
+ return f"x{op}{self.max}"
+
+ if self.max is None:
+ op = ">" if self.min_open else ">="
+ return f"x{op}{self.min}"
+
+ lop = "<" if self.min_open else "<="
+ rop = "<" if self.max_open else "<="
+ return f"{self.min}{lop}x{rop}{self.max}"
+
+ def __repr__(self) -> str:
+ clamp = " clamped" if self.clamp else ""
+ return f"<{type(self).__name__} {self._describe_range()}{clamp}>"
+
+
+class IntParamType(_NumberParamTypeBase):
+ name = "integer"
+ _number_class = int
+
+ def __repr__(self) -> str:
+ return "INT"
+
+
+class IntRange(_NumberRangeBase, IntParamType):
+ """Restrict an :data:`click.INT` value to a range of accepted
+ values. See :ref:`ranges`.
+
+ If ``min`` or ``max`` are not passed, any value is accepted in that
+ direction. If ``min_open`` or ``max_open`` are enabled, the
+ corresponding boundary is not included in the range.
+
+ If ``clamp`` is enabled, a value outside the range is clamped to the
+ boundary instead of failing.
+
+ .. versionchanged:: 8.0
+ Added the ``min_open`` and ``max_open`` parameters.
+ """
+
+ name = "integer range"
+
+ def _clamp( # type: ignore
+ self, bound: int, dir: t.Literal[1, -1], open: bool
+ ) -> int:
+ if not open:
+ return bound
+
+ return bound + dir
+
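
A quick sketch of the clamping behaviour described above (option name and bounds invented): with `clamp=True`, out-of-range input snaps to the nearest bound instead of failing.

```python
import click

@click.command()
@click.option("--level", type=click.IntRange(0, 10, clamp=True), default=5)
def verbosity(level):
    # "--level 99" becomes 10; "--level -3" becomes 0.
    click.echo(level)
```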
+
+class FloatParamType(_NumberParamTypeBase):
+ name = "float"
+ _number_class = float
+
+ def __repr__(self) -> str:
+ return "FLOAT"
+
+
+class FloatRange(_NumberRangeBase, FloatParamType):
+ """Restrict a :data:`click.FLOAT` value to a range of accepted
+ values. See :ref:`ranges`.
+
+ If ``min`` or ``max`` are not passed, any value is accepted in that
+ direction. If ``min_open`` or ``max_open`` are enabled, the
+ corresponding boundary is not included in the range.
+
+ If ``clamp`` is enabled, a value outside the range is clamped to the
+ boundary instead of failing. This is not supported if either
+ boundary is marked ``open``.
+
+ .. versionchanged:: 8.0
+ Added the ``min_open`` and ``max_open`` parameters.
+ """
+
+ name = "float range"
+
+ def __init__(
+ self,
+ min: float | None = None,
+ max: float | None = None,
+ min_open: bool = False,
+ max_open: bool = False,
+ clamp: bool = False,
+ ) -> None:
+ super().__init__(
+ min=min, max=max, min_open=min_open, max_open=max_open, clamp=clamp
+ )
+
+ if (min_open or max_open) and clamp:
+ raise TypeError("Clamping is not supported for open bounds.")
+
+ def _clamp(self, bound: float, dir: t.Literal[1, -1], open: bool) -> float:
+ if not open:
+ return bound
+
+ # Could use math.nextafter here, but clamping an
+ # open float range doesn't seem to be particularly useful. It's
+ # left up to the user to write a callback to do it if needed.
+ raise RuntimeError("Clamping is not supported for open bounds.")
+
+
+class BoolParamType(ParamType):
+ name = "boolean"
+
+ bool_states: dict[str, bool] = {
+ "1": True,
+ "0": False,
+ "yes": True,
+ "no": False,
+ "true": True,
+ "false": False,
+ "on": True,
+ "off": False,
+ "t": True,
+ "f": False,
+ "y": True,
+ "n": False,
+ # Absence of value is considered False.
+ "": False,
+ }
+ """A mapping of string values to boolean states.
+
+ Mapping is inspired by :py:attr:`configparser.ConfigParser.BOOLEAN_STATES`
+ and extends it.
+
+ .. caution::
+ String values are lower-cased, as the ``str_to_bool`` comparison function
+ below is case-insensitive.
+
+ .. warning::
+        The mapping is not exhaustive, and does not cover all possible boolean
+        string representations. It will remain as it is to avoid endless
+        bikeshedding.
+
+    Future work may be considered to make this mapping user-configurable via the
+    public API.
+ """
+
+ @staticmethod
+ def str_to_bool(value: str | bool) -> bool | None:
+ """Convert a string to a boolean value.
+
+ If the value is already a boolean, it is returned as-is. If the value is a
+        string, it is stripped of whitespace and lower-cased, then checked against
+ the known boolean states pre-defined in the `BoolParamType.bool_states` mapping
+ above.
+
+ Returns `None` if the value does not match any known boolean state.
+ """
+ if isinstance(value, bool):
+ return value
+ return BoolParamType.bool_states.get(value.strip().lower())
+
+ def convert(
+ self, value: t.Any, param: Parameter | None, ctx: Context | None
+ ) -> bool:
+ normalized = self.str_to_bool(value)
+ if normalized is None:
+ self.fail(
+ _(
+ "{value!r} is not a valid boolean. Recognized values: {states}"
+ ).format(value=value, states=", ".join(sorted(self.bool_states))),
+ param,
+ ctx,
+ )
+ return normalized
+
+ def __repr__(self) -> str:
+ return "BOOL"
+
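
The `str_to_bool` helper above can also be exercised directly; a minimal sketch against this vendored copy (the helper may not exist in every click release):

```python
from click.types import BoolParamType

# Case-insensitive and whitespace-tolerant; unknown strings yield None
# instead of raising, which convert() then turns into a BadParameter.
assert BoolParamType.str_to_bool(" Yes ") is True
assert BoolParamType.str_to_bool("off") is False
assert BoolParamType.str_to_bool("maybe") is None
```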
+
+class UUIDParameterType(ParamType):
+ name = "uuid"
+
+ def convert(
+ self, value: t.Any, param: Parameter | None, ctx: Context | None
+ ) -> t.Any:
+ import uuid
+
+ if isinstance(value, uuid.UUID):
+ return value
+
+ value = value.strip()
+
+ try:
+ return uuid.UUID(value)
+ except ValueError:
+ self.fail(
+ _("{value!r} is not a valid UUID.").format(value=value), param, ctx
+ )
+
+ def __repr__(self) -> str:
+ return "UUID"
+
+
+class File(ParamType):
+ """Declares a parameter to be a file for reading or writing. The file
+ is automatically closed once the context tears down (after the command
+ finished working).
+
+ Files can be opened for reading or writing. The special value ``-``
+ indicates stdin or stdout depending on the mode.
+
+ By default, the file is opened for reading text data, but it can also be
+ opened in binary mode or for writing. The encoding parameter can be used
+ to force a specific encoding.
+
+ The `lazy` flag controls if the file should be opened immediately or upon
+ first IO. The default is to be non-lazy for standard input and output
+ streams as well as files opened for reading, `lazy` otherwise. When opening a
+ file lazily for reading, it is still opened temporarily for validation, but
+ will not be held open until first IO. lazy is mainly useful when opening
+ for writing to avoid creating the file until it is needed.
+
+ Files can also be opened atomically in which case all writes go into a
+ separate file in the same folder and upon completion the file will
+ be moved over to the original location. This is useful if a file
+ regularly read by other users is modified.
+
+ See :ref:`file-args` for more information.
+
+ .. versionchanged:: 2.0
+ Added the ``atomic`` parameter.
+ """
+
+ name = "filename"
+ envvar_list_splitter: t.ClassVar[str] = os.path.pathsep
+
+ def __init__(
+ self,
+ mode: str = "r",
+ encoding: str | None = None,
+ errors: str | None = "strict",
+ lazy: bool | None = None,
+ atomic: bool = False,
+ ) -> None:
+ self.mode = mode
+ self.encoding = encoding
+ self.errors = errors
+ self.lazy = lazy
+ self.atomic = atomic
+
+ def to_info_dict(self) -> dict[str, t.Any]:
+ info_dict = super().to_info_dict()
+ info_dict.update(mode=self.mode, encoding=self.encoding)
+ return info_dict
+
+ def resolve_lazy_flag(self, value: str | os.PathLike[str]) -> bool:
+ if self.lazy is not None:
+ return self.lazy
+ if os.fspath(value) == "-":
+ return False
+ elif "w" in self.mode:
+ return True
+ return False
+
+ def convert(
+ self,
+ value: str | os.PathLike[str] | t.IO[t.Any],
+ param: Parameter | None,
+ ctx: Context | None,
+ ) -> t.IO[t.Any]:
+ if _is_file_like(value):
+ return value
+
+ value = t.cast("str | os.PathLike[str]", value)
+
+ try:
+ lazy = self.resolve_lazy_flag(value)
+
+ if lazy:
+ lf = LazyFile(
+ value, self.mode, self.encoding, self.errors, atomic=self.atomic
+ )
+
+ if ctx is not None:
+ ctx.call_on_close(lf.close_intelligently)
+
+ return t.cast("t.IO[t.Any]", lf)
+
+ f, should_close = open_stream(
+ value, self.mode, self.encoding, self.errors, atomic=self.atomic
+ )
+
+ # If a context is provided, we automatically close the file
+ # at the end of the context execution (or flush out). If a
+ # context does not exist, it's the caller's responsibility to
+ # properly close the file. This for instance happens when the
+ # type is used with prompts.
+ if ctx is not None:
+ if should_close:
+ ctx.call_on_close(safecall(f.close))
+ else:
+ ctx.call_on_close(safecall(f.flush))
+
+ return f
+ except OSError as e:
+ self.fail(f"'{format_filename(value)}': {e.strerror}", param, ctx)
+
+ def shell_complete(
+ self, ctx: Context, param: Parameter, incomplete: str
+ ) -> list[CompletionItem]:
+ """Return a special completion marker that tells the completion
+ system to use the shell to provide file path completions.
+
+ :param ctx: Invocation context for this command.
+ :param param: The parameter that is requesting completion.
+ :param incomplete: Value being completed. May be empty.
+
+ .. versionadded:: 8.0
+ """
+ from click.shell_completion import CompletionItem
+
+ return [CompletionItem(incomplete, type="file")]
+
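
A hedged usage sketch of the `File` type (command and file names invented): `-` maps to stdin/stdout, and a `lazy` + `atomic` output file is only created on first write and swapped into place when the context tears down.

```python
import click

@click.command()
@click.argument("src", type=click.File("r"))
@click.argument("dst", type=click.File("w", lazy=True, atomic=True))
def copy(src, dst):
    # e.g. "copy - draft.txt" reads stdin; the atomic output is written to a
    # temporary file and moved over the destination on close.
    dst.write(src.read())
```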
+
+def _is_file_like(value: t.Any) -> te.TypeGuard[t.IO[t.Any]]:
+ return hasattr(value, "read") or hasattr(value, "write")
+
+
+class Path(ParamType):
+ """The ``Path`` type is similar to the :class:`File` type, but
+ returns the filename instead of an open file. Various checks can be
+ enabled to validate the type of file and permissions.
+
+ :param exists: The file or directory needs to exist for the value to
+ be valid. If this is not set to ``True``, and the file does not
+ exist, then all further checks are silently skipped.
+ :param file_okay: Allow a file as a value.
+ :param dir_okay: Allow a directory as a value.
+ :param readable: if true, a readable check is performed.
+ :param writable: if true, a writable check is performed.
+ :param executable: if true, an executable check is performed.
+ :param resolve_path: Make the value absolute and resolve any
+ symlinks. A ``~`` is not expanded, as this is supposed to be
+ done by the shell only.
+ :param allow_dash: Allow a single dash as a value, which indicates
+ a standard stream (but does not open it). Use
+ :func:`~click.open_file` to handle opening this value.
+ :param path_type: Convert the incoming path value to this type. If
+ ``None``, keep Python's default, which is ``str``. Useful to
+ convert to :class:`pathlib.Path`.
+
+ .. versionchanged:: 8.1
+ Added the ``executable`` parameter.
+
+ .. versionchanged:: 8.0
+ Allow passing ``path_type=pathlib.Path``.
+
+ .. versionchanged:: 6.0
+ Added the ``allow_dash`` parameter.
+ """
+
+ envvar_list_splitter: t.ClassVar[str] = os.path.pathsep
+
+ def __init__(
+ self,
+ exists: bool = False,
+ file_okay: bool = True,
+ dir_okay: bool = True,
+ writable: bool = False,
+ readable: bool = True,
+ resolve_path: bool = False,
+ allow_dash: bool = False,
+ path_type: type[t.Any] | None = None,
+ executable: bool = False,
+ ):
+ self.exists = exists
+ self.file_okay = file_okay
+ self.dir_okay = dir_okay
+ self.readable = readable
+ self.writable = writable
+ self.executable = executable
+ self.resolve_path = resolve_path
+ self.allow_dash = allow_dash
+ self.type = path_type
+
+ if self.file_okay and not self.dir_okay:
+ self.name: str = _("file")
+ elif self.dir_okay and not self.file_okay:
+ self.name = _("directory")
+ else:
+ self.name = _("path")
+
+ def to_info_dict(self) -> dict[str, t.Any]:
+ info_dict = super().to_info_dict()
+ info_dict.update(
+ exists=self.exists,
+ file_okay=self.file_okay,
+ dir_okay=self.dir_okay,
+ writable=self.writable,
+ readable=self.readable,
+ allow_dash=self.allow_dash,
+ )
+ return info_dict
+
+ def coerce_path_result(
+ self, value: str | os.PathLike[str]
+ ) -> str | bytes | os.PathLike[str]:
+ if self.type is not None and not isinstance(value, self.type):
+ if self.type is str:
+ return os.fsdecode(value)
+ elif self.type is bytes:
+ return os.fsencode(value)
+ else:
+ return t.cast("os.PathLike[str]", self.type(value))
+
+ return value
+
+ def convert(
+ self,
+ value: str | os.PathLike[str],
+ param: Parameter | None,
+ ctx: Context | None,
+ ) -> str | bytes | os.PathLike[str]:
+ rv = value
+
+ is_dash = self.file_okay and self.allow_dash and rv in (b"-", "-")
+
+ if not is_dash:
+ if self.resolve_path:
+ rv = os.path.realpath(rv)
+
+ try:
+ st = os.stat(rv)
+ except OSError:
+ if not self.exists:
+ return self.coerce_path_result(rv)
+ self.fail(
+ _("{name} {filename!r} does not exist.").format(
+ name=self.name.title(), filename=format_filename(value)
+ ),
+ param,
+ ctx,
+ )
+
+ if not self.file_okay and stat.S_ISREG(st.st_mode):
+ self.fail(
+ _("{name} {filename!r} is a file.").format(
+ name=self.name.title(), filename=format_filename(value)
+ ),
+ param,
+ ctx,
+ )
+ if not self.dir_okay and stat.S_ISDIR(st.st_mode):
+ self.fail(
+ _("{name} {filename!r} is a directory.").format(
+ name=self.name.title(), filename=format_filename(value)
+ ),
+ param,
+ ctx,
+ )
+
+ if self.readable and not os.access(rv, os.R_OK):
+ self.fail(
+ _("{name} {filename!r} is not readable.").format(
+ name=self.name.title(), filename=format_filename(value)
+ ),
+ param,
+ ctx,
+ )
+
+ if self.writable and not os.access(rv, os.W_OK):
+ self.fail(
+ _("{name} {filename!r} is not writable.").format(
+ name=self.name.title(), filename=format_filename(value)
+ ),
+ param,
+ ctx,
+ )
+
+ if self.executable and not os.access(value, os.X_OK):
+ self.fail(
+ _("{name} {filename!r} is not executable.").format(
+ name=self.name.title(), filename=format_filename(value)
+ ),
+ param,
+ ctx,
+ )
+
+ return self.coerce_path_result(rv)
+
+ def shell_complete(
+ self, ctx: Context, param: Parameter, incomplete: str
+ ) -> list[CompletionItem]:
+ """Return a special completion marker that tells the completion
+ system to use the shell to provide path completions for only
+ directories or any paths.
+
+ :param ctx: Invocation context for this command.
+ :param param: The parameter that is requesting completion.
+ :param incomplete: Value being completed. May be empty.
+
+ .. versionadded:: 8.0
+ """
+ from click.shell_completion import CompletionItem
+
+ type = "dir" if self.dir_okay and not self.file_okay else "file"
+ return [CompletionItem(incomplete, type=type)]
+
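
An illustrative use of `Path` (names invented): `exists=True` plus `dir_okay=False` rejects missing paths and directories, and `path_type=pathlib.Path` converts the result before it reaches the callback.

```python
import pathlib
import click

@click.command()
@click.argument(
    "config",
    type=click.Path(exists=True, dir_okay=False, readable=True,
                    path_type=pathlib.Path),
)
def show(config):
    # config arrives as a pathlib.Path here, not a plain str.
    click.echo(config.read_text())
```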
+
+class Tuple(CompositeParamType):
+ """The default behavior of Click is to apply a type on a value directly.
+ This works well in most cases, except for when `nargs` is set to a fixed
+ count and different types should be used for different items. In this
+ case the :class:`Tuple` type can be used. This type can only be used
+ if `nargs` is set to a fixed number.
+
+ For more information see :ref:`tuple-type`.
+
+ This can be selected by using a Python tuple literal as a type.
+
+ :param types: a list of types that should be used for the tuple items.
+ """
+
+ def __init__(self, types: cabc.Sequence[type[t.Any] | ParamType]) -> None:
+ self.types: cabc.Sequence[ParamType] = [convert_type(ty) for ty in types]
+
+ def to_info_dict(self) -> dict[str, t.Any]:
+ info_dict = super().to_info_dict()
+ info_dict["types"] = [t.to_info_dict() for t in self.types]
+ return info_dict
+
+ @property
+ def name(self) -> str: # type: ignore
+ return f"<{' '.join(ty.name for ty in self.types)}>"
+
+ @property
+ def arity(self) -> int: # type: ignore
+ return len(self.types)
+
+ def convert(
+ self, value: t.Any, param: Parameter | None, ctx: Context | None
+ ) -> t.Any:
+ len_type = len(self.types)
+ len_value = len(value)
+
+ if len_value != len_type:
+ self.fail(
+ ngettext(
+ "{len_type} values are required, but {len_value} was given.",
+ "{len_type} values are required, but {len_value} were given.",
+ len_value,
+ ).format(len_type=len_type, len_value=len_value),
+ param=param,
+ ctx=ctx,
+ )
+
+ return tuple(
+ ty(x, param, ctx) for ty, x in zip(self.types, value, strict=False)
+ )
+
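
A short sketch of selecting `Tuple` via a Python tuple literal, as the docstring above mentions (option name and default invented):

```python
import click

@click.command()
@click.option("--size", type=(int, int), default=(640, 480))
def resize(size):
    # The literal (int, int) is converted by convert_type() into a
    # Tuple of two INT types with arity 2.
    width, height = size
    click.echo(f"{width}x{height}")
```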
+
+def convert_type(ty: t.Any | None, default: t.Any | None = None) -> ParamType:
+ """Find the most appropriate :class:`ParamType` for the given Python
+ type. If the type isn't provided, it can be inferred from a default
+ value.
+ """
+ guessed_type = False
+
+ if ty is None and default is not None:
+ if isinstance(default, (tuple, list)):
+ # If the default is empty, ty will remain None and will
+ # return STRING.
+ if default:
+ item = default[0]
+
+ # A tuple of tuples needs to detect the inner types.
+ # Can't call convert recursively because that would
+ # incorrectly unwind the tuple to a single type.
+ if isinstance(item, (tuple, list)):
+ ty = tuple(map(type, item))
+ else:
+ ty = type(item)
+ else:
+ ty = type(default)
+
+ guessed_type = True
+
+ if isinstance(ty, tuple):
+ return Tuple(ty)
+
+ if isinstance(ty, ParamType):
+ return ty
+
+ if ty is str or ty is None:
+ return STRING
+
+ if ty is int:
+ return INT
+
+ if ty is float:
+ return FLOAT
+
+ if ty is bool:
+ return BOOL
+
+ if guessed_type:
+ return STRING
+
+ if __debug__:
+ try:
+ if issubclass(ty, ParamType):
+ raise AssertionError(
+ f"Attempted to use an uninstantiated parameter type ({ty})."
+ )
+ except TypeError:
+ # ty is an instance (correct), so issubclass fails.
+ pass
+
+ return FuncParamType(ty)
+
+
+#: A dummy parameter type that just does nothing. From a user's
+#: perspective this appears to just be the same as `STRING` but
+#: internally no string conversion takes place if the input was bytes.
+#: This is usually useful when working with file paths as they can
+#: appear in bytes and unicode.
+#:
+#: For path related uses the :class:`Path` type is a better choice but
+#: there are situations where an unprocessed type is useful which is why
+#: it is provided.
+#:
+#: .. versionadded:: 4.0
+UNPROCESSED = UnprocessedParamType()
+
+#: A unicode string parameter type which is the implicit default. This
+#: can also be selected by using ``str`` as type.
+STRING = StringParamType()
+
+#: An integer parameter. This can also be selected by using ``int`` as
+#: type.
+INT = IntParamType()
+
+#: A floating point value parameter. This can also be selected by using
+#: ``float`` as type.
+FLOAT = FloatParamType()
+
+#: A boolean parameter. This is the default for boolean flags. This can
+#: also be selected by using ``bool`` as a type.
+BOOL = BoolParamType()
+
+#: A UUID parameter.
+UUID = UUIDParameterType()
+
+
+class OptionHelpExtra(t.TypedDict, total=False):
+ envvars: tuple[str, ...]
+ default: str
+ range: str
+ required: str
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/click/utils.py b/stripe_config/.venv/lib/python3.12/site-packages/click/utils.py
new file mode 100644
index 0000000..beae26f
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/click/utils.py
@@ -0,0 +1,627 @@
+from __future__ import annotations
+
+import collections.abc as cabc
+import os
+import re
+import sys
+import typing as t
+from functools import update_wrapper
+from types import ModuleType
+from types import TracebackType
+
+from ._compat import _default_text_stderr
+from ._compat import _default_text_stdout
+from ._compat import _find_binary_writer
+from ._compat import auto_wrap_for_ansi
+from ._compat import binary_streams
+from ._compat import open_stream
+from ._compat import should_strip_ansi
+from ._compat import strip_ansi
+from ._compat import text_streams
+from ._compat import WIN
+from .globals import resolve_color_default
+
+if t.TYPE_CHECKING:
+ import typing_extensions as te
+
+ P = te.ParamSpec("P")
+
+R = t.TypeVar("R")
+
+
+def _posixify(name: str) -> str:
+ return "-".join(name.split()).lower()
+
+
+def safecall(func: t.Callable[P, R]) -> t.Callable[P, R | None]:
+ """Wraps a function so that it swallows exceptions."""
+
+ def wrapper(*args: P.args, **kwargs: P.kwargs) -> R | None:
+ try:
+ return func(*args, **kwargs)
+ except Exception:
+ pass
+ return None
+
+ return update_wrapper(wrapper, func)
+
+
+def make_str(value: t.Any) -> str:
+ """Converts a value into a valid string."""
+ if isinstance(value, bytes):
+ try:
+ return value.decode(sys.getfilesystemencoding())
+ except UnicodeError:
+ return value.decode("utf-8", "replace")
+ return str(value)
+
+
+def make_default_short_help(help: str, max_length: int = 45) -> str:
+ """Returns a condensed version of help string."""
+ # Consider only the first paragraph.
+ paragraph_end = help.find("\n\n")
+
+ if paragraph_end != -1:
+ help = help[:paragraph_end]
+
+ # Collapse newlines, tabs, and spaces.
+ words = help.split()
+
+ if not words:
+ return ""
+
+ # The first paragraph started with a "no rewrap" marker, ignore it.
+ if words[0] == "\b":
+ words = words[1:]
+
+ total_length = 0
+ last_index = len(words) - 1
+
+ for i, word in enumerate(words):
+ total_length += len(word) + (i > 0)
+
+ if total_length > max_length: # too long, truncate
+ break
+
+ if word[-1] == ".": # sentence end, truncate without "..."
+ return " ".join(words[: i + 1])
+
+ if total_length == max_length and i != last_index:
+ break # not at sentence end, truncate with "..."
+ else:
+ return " ".join(words) # no truncation needed
+
+ # Account for the length of the suffix.
+ total_length += len("...")
+
+ # remove words until the length is short enough
+ while i > 0:
+ total_length -= len(words[i]) + (i > 0)
+
+ if total_length <= max_length:
+ break
+
+ i -= 1
+
+ return " ".join(words[:i]) + "..."
+
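
A quick illustration of the truncation logic above (the help text is made up): only the first paragraph is considered, words are never split, and `...` is appended when truncation happens mid-sentence.

```python
from click.utils import make_default_short_help

long_help = (
    "Synchronize the local cache with the remote index, retrying on "
    "transient network failures.\n\n"
    "This second paragraph is ignored entirely."
)
print(make_default_short_help(long_help, max_length=45))
```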
+
+class LazyFile:
+ """A lazy file works like a regular file but it does not fully open
+ the file but it does perform some basic checks early to see if the
+ filename parameter does make sense. This is useful for safely opening
+ files for writing.
+ """
+
+ def __init__(
+ self,
+ filename: str | os.PathLike[str],
+ mode: str = "r",
+ encoding: str | None = None,
+ errors: str | None = "strict",
+ atomic: bool = False,
+ ):
+ self.name: str = os.fspath(filename)
+ self.mode = mode
+ self.encoding = encoding
+ self.errors = errors
+ self.atomic = atomic
+ self._f: t.IO[t.Any] | None
+ self.should_close: bool
+
+ if self.name == "-":
+ self._f, self.should_close = open_stream(filename, mode, encoding, errors)
+ else:
+ if "r" in mode:
+ # Open and close the file in case we're opening it for
+ # reading so that we can catch at least some errors in
+ # some cases early.
+ open(filename, mode).close()
+ self._f = None
+ self.should_close = True
+
+ def __getattr__(self, name: str) -> t.Any:
+ return getattr(self.open(), name)
+
+ def __repr__(self) -> str:
+ if self._f is not None:
+ return repr(self._f)
+        return f"<unopened file '{format_filename(self.name)}' {self.mode}>"
+
+ def open(self) -> t.IO[t.Any]:
+ """Opens the file if it's not yet open. This call might fail with
+ a :exc:`FileError`. Not handling this error will produce an error
+ that Click shows.
+ """
+ if self._f is not None:
+ return self._f
+ try:
+ rv, self.should_close = open_stream(
+ self.name, self.mode, self.encoding, self.errors, atomic=self.atomic
+ )
+ except OSError as e:
+ from .exceptions import FileError
+
+ raise FileError(self.name, hint=e.strerror) from e
+ self._f = rv
+ return rv
+
+ def close(self) -> None:
+ """Closes the underlying file, no matter what."""
+ if self._f is not None:
+ self._f.close()
+
+ def close_intelligently(self) -> None:
+ """This function only closes the file if it was opened by the lazy
+ file wrapper. For instance this will never close stdin.
+ """
+ if self.should_close:
+ self.close()
+
+ def __enter__(self) -> LazyFile:
+ return self
+
+ def __exit__(
+ self,
+ exc_type: type[BaseException] | None,
+ exc_value: BaseException | None,
+ tb: TracebackType | None,
+ ) -> None:
+ self.close_intelligently()
+
+ def __iter__(self) -> cabc.Iterator[t.AnyStr]:
+ self.open()
+ return iter(self._f) # type: ignore
+
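
A minimal sketch of `LazyFile` for writing (filename invented): nothing is created until the first attribute access triggers `open()`, and `close_intelligently()` never closes a standard stream it did not open.

```python
from click.utils import LazyFile

out = LazyFile("report.txt", "w")
out.write("generated lazily\n")   # first IO opens the real file here
out.close_intelligently()
```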
+
+class KeepOpenFile:
+ def __init__(self, file: t.IO[t.Any]) -> None:
+ self._file: t.IO[t.Any] = file
+
+ def __getattr__(self, name: str) -> t.Any:
+ return getattr(self._file, name)
+
+ def __enter__(self) -> KeepOpenFile:
+ return self
+
+ def __exit__(
+ self,
+ exc_type: type[BaseException] | None,
+ exc_value: BaseException | None,
+ tb: TracebackType | None,
+ ) -> None:
+ pass
+
+ def __repr__(self) -> str:
+ return repr(self._file)
+
+ def __iter__(self) -> cabc.Iterator[t.AnyStr]:
+ return iter(self._file)
+
+
+def echo(
+ message: t.Any | None = None,
+ file: t.IO[t.Any] | None = None,
+ nl: bool = True,
+ err: bool = False,
+ color: bool | None = None,
+) -> None:
+ """Print a message and newline to stdout or a file. This should be
+ used instead of :func:`print` because it provides better support
+ for different data, files, and environments.
+
+ Compared to :func:`print`, this does the following:
+
+ - Ensures that the output encoding is not misconfigured on Linux.
+ - Supports Unicode in the Windows console.
+ - Supports writing to binary outputs, and supports writing bytes
+ to text outputs.
+ - Supports colors and styles on Windows.
+ - Removes ANSI color and style codes if the output does not look
+ like an interactive terminal.
+ - Always flushes the output.
+
+ :param message: The string or bytes to output. Other objects are
+ converted to strings.
+ :param file: The file to write to. Defaults to ``stdout``.
+ :param err: Write to ``stderr`` instead of ``stdout``.
+ :param nl: Print a newline after the message. Enabled by default.
+ :param color: Force showing or hiding colors and other styles. By
+ default Click will remove color if the output does not look like
+ an interactive terminal.
+
+ .. versionchanged:: 6.0
+ Support Unicode output on the Windows console. Click does not
+ modify ``sys.stdout``, so ``sys.stdout.write()`` and ``print()``
+ will still not support Unicode.
+
+ .. versionchanged:: 4.0
+ Added the ``color`` parameter.
+
+ .. versionadded:: 3.0
+ Added the ``err`` parameter.
+
+ .. versionchanged:: 2.0
+ Support colors on Windows if colorama is installed.
+ """
+ if file is None:
+ if err:
+ file = _default_text_stderr()
+ else:
+ file = _default_text_stdout()
+
+ # There are no standard streams attached to write to. For example,
+ # pythonw on Windows.
+ if file is None:
+ return
+
+ # Convert non bytes/text into the native string type.
+ if message is not None and not isinstance(message, (str, bytes, bytearray)):
+ out: str | bytes | bytearray | None = str(message)
+ else:
+ out = message
+
+ if nl:
+ out = out or ""
+ if isinstance(out, str):
+ out += "\n"
+ else:
+ out += b"\n"
+
+ if not out:
+ file.flush()
+ return
+
+ # If there is a message and the value looks like bytes, we manually
+ # need to find the binary stream and write the message in there.
+ # This is done separately so that most stream types will work as you
+ # would expect. Eg: you can write to StringIO for other cases.
+ if isinstance(out, (bytes, bytearray)):
+ binary_file = _find_binary_writer(file)
+
+ if binary_file is not None:
+ file.flush()
+ binary_file.write(out)
+ binary_file.flush()
+ return
+
+ # ANSI style code support. For no message or bytes, nothing happens.
+ # When outputting to a file instead of a terminal, strip codes.
+ else:
+ color = resolve_color_default(color)
+
+ if should_strip_ansi(file, color):
+ out = strip_ansi(out)
+ elif WIN:
+ if auto_wrap_for_ansi is not None:
+ file = auto_wrap_for_ansi(file, color) # type: ignore
+ elif not color:
+ out = strip_ansi(out)
+
+ file.write(out) # type: ignore
+ file.flush()
+
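
A few hedged `echo` calls showing the behaviours listed above (messages invented): strings and bytes are both accepted, styled output is stripped when not writing to a terminal, and `err=True` targets stderr.

```python
import click

click.echo("plain text")
click.echo(b"raw bytes")  # routed to the underlying binary stream
click.echo(click.style("warning", fg="red"), err=True)  # stderr, auto-flushed
```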
+
+def get_binary_stream(name: t.Literal["stdin", "stdout", "stderr"]) -> t.BinaryIO:
+ """Returns a system stream for byte processing.
+
+ :param name: the name of the stream to open. Valid names are ``'stdin'``,
+ ``'stdout'`` and ``'stderr'``
+ """
+ opener = binary_streams.get(name)
+ if opener is None:
+ raise TypeError(f"Unknown standard stream '{name}'")
+ return opener()
+
+
+def get_text_stream(
+ name: t.Literal["stdin", "stdout", "stderr"],
+ encoding: str | None = None,
+ errors: str | None = "strict",
+) -> t.TextIO:
+ """Returns a system stream for text processing. This usually returns
+ a wrapped stream around a binary stream returned from
+ :func:`get_binary_stream` but it also can take shortcuts for already
+ correctly configured streams.
+
+ :param name: the name of the stream to open. Valid names are ``'stdin'``,
+ ``'stdout'`` and ``'stderr'``
+ :param encoding: overrides the detected default encoding.
+ :param errors: overrides the default error mode.
+ """
+ opener = text_streams.get(name)
+ if opener is None:
+ raise TypeError(f"Unknown standard stream '{name}'")
+ return opener(encoding, errors)
+
+
+def open_file(
+ filename: str | os.PathLike[str],
+ mode: str = "r",
+ encoding: str | None = None,
+ errors: str | None = "strict",
+ lazy: bool = False,
+ atomic: bool = False,
+) -> t.IO[t.Any]:
+ """Open a file, with extra behavior to handle ``'-'`` to indicate
+ a standard stream, lazy open on write, and atomic write. Similar to
+ the behavior of the :class:`~click.File` param type.
+
+ If ``'-'`` is given to open ``stdout`` or ``stdin``, the stream is
+ wrapped so that using it in a context manager will not close it.
+ This makes it possible to use the function without accidentally
+ closing a standard stream:
+
+ .. code-block:: python
+
+ with open_file(filename) as f:
+ ...
+
+ :param filename: The name or Path of the file to open, or ``'-'`` for
+ ``stdin``/``stdout``.
+ :param mode: The mode in which to open the file.
+ :param encoding: The encoding to decode or encode a file opened in
+ text mode.
+ :param errors: The error handling mode.
+ :param lazy: Wait to open the file until it is accessed. For read
+ mode, the file is temporarily opened to raise access errors
+ early, then closed until it is read again.
+ :param atomic: Write to a temporary file and replace the given file
+ on close.
+
+ .. versionadded:: 3.0
+ """
+ if lazy:
+ return t.cast(
+ "t.IO[t.Any]", LazyFile(filename, mode, encoding, errors, atomic=atomic)
+ )
+
+ f, should_close = open_stream(filename, mode, encoding, errors, atomic=atomic)
+
+ if not should_close:
+ f = t.cast("t.IO[t.Any]", KeepOpenFile(f))
+
+ return f
+
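
A small sketch of `open_file` with the `-` convention (output text invented): the returned stream is wrapped in `KeepOpenFile`, so leaving the `with` block does not close stdout.

```python
import click

with click.open_file("-", "w") as f:
    f.write("goes to stdout without closing it\n")
```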
+
+def format_filename(
+ filename: str | bytes | os.PathLike[str] | os.PathLike[bytes],
+ shorten: bool = False,
+) -> str:
+ """Format a filename as a string for display. Ensures the filename can be
+ displayed by replacing any invalid bytes or surrogate escapes in the name
+ with the replacement character ``�``.
+
+ Invalid bytes or surrogate escapes will raise an error when written to a
+ stream with ``errors="strict"``. This will typically happen with ``stdout``
+ when the locale is something like ``en_GB.UTF-8``.
+
+ Many scenarios *are* safe to write surrogates though, due to PEP 538 and
+ PEP 540, including:
+
+ - Writing to ``stderr``, which uses ``errors="backslashreplace"``.
+ - The system has ``LANG=C.UTF-8``, ``C``, or ``POSIX``. Python opens
+ stdout and stderr with ``errors="surrogateescape"``.
+ - None of ``LANG/LC_*`` are set. Python assumes ``LANG=C.UTF-8``.
+ - Python is started in UTF-8 mode with ``PYTHONUTF8=1`` or ``-X utf8``.
+ Python opens stdout and stderr with ``errors="surrogateescape"``.
+
+ :param filename: formats a filename for UI display. This will also convert
+ the filename into unicode without failing.
+ :param shorten: this optionally shortens the filename to strip of the
+ path that leads up to it.
+ """
+ if shorten:
+ filename = os.path.basename(filename)
+ else:
+ filename = os.fspath(filename)
+
+ if isinstance(filename, bytes):
+ filename = filename.decode(sys.getfilesystemencoding(), "replace")
+ else:
+ filename = filename.encode("utf-8", "surrogateescape").decode(
+ "utf-8", "replace"
+ )
+
+ return filename
+
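
A hedged illustration of `format_filename` (file names invented; the exact replacement output depends on the filesystem encoding, UTF-8 assumed here):

```python
from click.utils import format_filename

# Undecodable bytes are replaced with the Unicode replacement character so
# the result is always safe to print.
raw = b"caf\xe9.txt"                        # latin-1 bytes, invalid UTF-8
print(format_filename(raw))                 # -> "caf\ufffd.txt" on UTF-8 systems
print(format_filename("/tmp/data/café.txt", shorten=True))  # -> "café.txt"
```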
+
+def get_app_dir(app_name: str, roaming: bool = True, force_posix: bool = False) -> str:
+ r"""Returns the config folder for the application. The default behavior
+ is to return whatever is most appropriate for the operating system.
+
+ To give you an idea, for an app called ``"Foo Bar"``, something like
+ the following folders could be returned:
+
+ Mac OS X:
+ ``~/Library/Application Support/Foo Bar``
+ Mac OS X (POSIX):
+ ``~/.foo-bar``
+ Unix:
+ ``~/.config/foo-bar``
+ Unix (POSIX):
+ ``~/.foo-bar``
+ Windows (roaming):
+      ``C:\Users\<user>\AppData\Roaming\Foo Bar``
+    Windows (not roaming):
+      ``C:\Users\<user>\AppData\Local\Foo Bar``
+
+ .. versionadded:: 2.0
+
+ :param app_name: the application name. This should be properly capitalized
+ and can contain whitespace.
+ :param roaming: controls if the folder should be roaming or not on Windows.
+ Has no effect otherwise.
+ :param force_posix: if this is set to `True` then on any POSIX system the
+ folder will be stored in the home folder with a leading
+ dot instead of the XDG config home or darwin's
+ application support folder.
+ """
+ if WIN:
+ key = "APPDATA" if roaming else "LOCALAPPDATA"
+ folder = os.environ.get(key)
+ if folder is None:
+ folder = os.path.expanduser("~")
+ return os.path.join(folder, app_name)
+ if force_posix:
+ return os.path.join(os.path.expanduser(f"~/.{_posixify(app_name)}"))
+ if sys.platform == "darwin":
+ return os.path.join(
+ os.path.expanduser("~/Library/Application Support"), app_name
+ )
+ return os.path.join(
+ os.environ.get("XDG_CONFIG_HOME", os.path.expanduser("~/.config")),
+ _posixify(app_name),
+ )
+
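
An illustrative call to `get_app_dir` (the app name and config file are invented): the returned folder follows the per-platform conventions listed in the docstring above.

```python
import os
import click

cfg_dir = click.get_app_dir("My Tool")           # e.g. ~/.config/my-tool on Unix
cfg_path = os.path.join(cfg_dir, "config.toml")  # hypothetical config file
print(cfg_path)
```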
+
+class PacifyFlushWrapper:
+ """This wrapper is used to catch and suppress BrokenPipeErrors resulting
+ from ``.flush()`` being called on broken pipe during the shutdown/final-GC
+ of the Python interpreter. Notably ``.flush()`` is always called on
+ ``sys.stdout`` and ``sys.stderr``. So as to have minimal impact on any
+ other cleanup code, and the case where the underlying file is not a broken
+ pipe, all calls and attributes are proxied.
+ """
+
+ def __init__(self, wrapped: t.IO[t.Any]) -> None:
+ self.wrapped = wrapped
+
+ def flush(self) -> None:
+ try:
+ self.wrapped.flush()
+ except OSError as e:
+ import errno
+
+ if e.errno != errno.EPIPE:
+ raise
+
+ def __getattr__(self, attr: str) -> t.Any:
+ return getattr(self.wrapped, attr)
+
+
+def _detect_program_name(
+ path: str | None = None, _main: ModuleType | None = None
+) -> str:
+ """Determine the command used to run the program, for use in help
+ text. If a file or entry point was executed, the file name is
+ returned. If ``python -m`` was used to execute a module or package,
+ ``python -m name`` is returned.
+
+    This doesn't try to be too precise; the goal is to give a concise
+ name for help text. Files are only shown as their name without the
+ path. ``python`` is only shown for modules, and the full path to
+ ``sys.executable`` is not shown.
+
+ :param path: The Python file being executed. Python puts this in
+ ``sys.argv[0]``, which is used by default.
+ :param _main: The ``__main__`` module. This should only be passed
+ during internal testing.
+
+ .. versionadded:: 8.0
+ Based on command args detection in the Werkzeug reloader.
+
+ :meta private:
+ """
+ if _main is None:
+ _main = sys.modules["__main__"]
+
+ if not path:
+ path = sys.argv[0]
+
+ # The value of __package__ indicates how Python was called. It may
+ # not exist if a setuptools script is installed as an egg. It may be
+ # set incorrectly for entry points created with pip on Windows.
+ # It is set to "" inside a Shiv or PEX zipapp.
+ if getattr(_main, "__package__", None) in {None, ""} or (
+ os.name == "nt"
+ and _main.__package__ == ""
+ and not os.path.exists(path)
+ and os.path.exists(f"{path}.exe")
+ ):
+ # Executed a file, like "python app.py".
+ return os.path.basename(path)
+
+ # Executed a module, like "python -m example".
+ # Rewritten by Python from "-m script" to "/path/to/script.py".
+ # Need to look at main module to determine how it was executed.
+ py_module = t.cast(str, _main.__package__)
+ name = os.path.splitext(os.path.basename(path))[0]
+
+ # A submodule like "example.cli".
+ if name != "__main__":
+ py_module = f"{py_module}.{name}"
+
+ return f"python -m {py_module.lstrip('.')}"
+
+
+def _expand_args(
+ args: cabc.Iterable[str],
+ *,
+ user: bool = True,
+ env: bool = True,
+ glob_recursive: bool = True,
+) -> list[str]:
+ """Simulate Unix shell expansion with Python functions.
+
+ See :func:`glob.glob`, :func:`os.path.expanduser`, and
+ :func:`os.path.expandvars`.
+
+ This is intended for use on Windows, where the shell does not do any
+ expansion. It may not exactly match what a Unix shell would do.
+
+ :param args: List of command line arguments to expand.
+ :param user: Expand user home directory.
+ :param env: Expand environment variables.
+ :param glob_recursive: ``**`` matches directories recursively.
+
+ .. versionchanged:: 8.1
+ Invalid glob patterns are treated as empty expansions rather
+ than raising an error.
+
+ .. versionadded:: 8.0
+
+ :meta private:
+ """
+ from glob import glob
+
+ out = []
+
+ for arg in args:
+ if user:
+ arg = os.path.expanduser(arg)
+
+ if env:
+ arg = os.path.expandvars(arg)
+
+ try:
+ matches = glob(arg, recursive=glob_recursive)
+ except re.error:
+ matches = []
+
+ if not matches:
+ out.append(arg)
+ else:
+ out.extend(matches)
+
+ return out
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/dotenv/__init__.py b/stripe_config/.venv/lib/python3.12/site-packages/dotenv/__init__.py
new file mode 100644
index 0000000..7f4c631
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/dotenv/__init__.py
@@ -0,0 +1,49 @@
+from typing import Any, Optional
+
+from .main import (dotenv_values, find_dotenv, get_key, load_dotenv, set_key,
+ unset_key)
+
+
+def load_ipython_extension(ipython: Any) -> None:
+ from .ipython import load_ipython_extension
+ load_ipython_extension(ipython)
+
+
+def get_cli_string(
+ path: Optional[str] = None,
+ action: Optional[str] = None,
+ key: Optional[str] = None,
+ value: Optional[str] = None,
+ quote: Optional[str] = None,
+):
+ """Returns a string suitable for running as a shell script.
+
+    Useful for converting arguments passed to a fabric task
+ to be passed to a `local` or `run` command.
+ """
+ command = ['dotenv']
+ if quote:
+ command.append(f'-q {quote}')
+ if path:
+ command.append(f'-f {path}')
+ if action:
+ command.append(action)
+ if key:
+ command.append(key)
+ if value:
+ if ' ' in value:
+ command.append(f'"{value}"')
+ else:
+ command.append(value)
+
+ return ' '.join(command).strip()
+
+
+__all__ = ['get_cli_string',
+ 'load_dotenv',
+ 'dotenv_values',
+ 'get_key',
+ 'set_key',
+ 'unset_key',
+ 'find_dotenv',
+ 'load_ipython_extension']
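
A hedged sketch of the helpers re-exported above (the variable name and file contents are hypothetical): `load_dotenv` merges a `.env` file into `os.environ`, `dotenv_values` returns the parsed pairs without touching the environment, and `get_cli_string` builds the equivalent shell command.

```python
import os
from dotenv import load_dotenv, dotenv_values, get_cli_string

load_dotenv()                   # find and load a .env file, if present
config = dotenv_values(".env")  # e.g. {"API_KEY": "..."} (hypothetical key)
print(os.environ.get("API_KEY"), config.get("API_KEY"))

# Shell command string for the same operation, as built by get_cli_string().
print(get_cli_string(path=".env", action="set", key="API_KEY", value="secret value"))
# -> dotenv -f .env set API_KEY "secret value"
```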
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/dotenv/__main__.py b/stripe_config/.venv/lib/python3.12/site-packages/dotenv/__main__.py
new file mode 100644
index 0000000..3977f55
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/dotenv/__main__.py
@@ -0,0 +1,6 @@
+"""Entry point for cli, enables execution with `python -m dotenv`"""
+
+from .cli import cli
+
+if __name__ == "__main__":
+ cli()
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/dotenv/__pycache__/__init__.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/dotenv/__pycache__/__init__.cpython-312.pyc
new file mode 100644
index 0000000..70de42f
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/dotenv/__pycache__/__init__.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/dotenv/__pycache__/__main__.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/dotenv/__pycache__/__main__.cpython-312.pyc
new file mode 100644
index 0000000..e9ff056
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/dotenv/__pycache__/__main__.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/dotenv/__pycache__/cli.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/dotenv/__pycache__/cli.cpython-312.pyc
new file mode 100644
index 0000000..7931a43
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/dotenv/__pycache__/cli.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/dotenv/__pycache__/ipython.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/dotenv/__pycache__/ipython.cpython-312.pyc
new file mode 100644
index 0000000..6f665bb
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/dotenv/__pycache__/ipython.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/dotenv/__pycache__/main.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/dotenv/__pycache__/main.cpython-312.pyc
new file mode 100644
index 0000000..f82520c
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/dotenv/__pycache__/main.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/dotenv/__pycache__/parser.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/dotenv/__pycache__/parser.cpython-312.pyc
new file mode 100644
index 0000000..d0347f8
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/dotenv/__pycache__/parser.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/dotenv/__pycache__/variables.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/dotenv/__pycache__/variables.cpython-312.pyc
new file mode 100644
index 0000000..4a6a05c
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/dotenv/__pycache__/variables.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/dotenv/__pycache__/version.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/dotenv/__pycache__/version.cpython-312.pyc
new file mode 100644
index 0000000..0b89928
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/dotenv/__pycache__/version.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/dotenv/cli.py b/stripe_config/.venv/lib/python3.12/site-packages/dotenv/cli.py
new file mode 100644
index 0000000..65ead46
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/dotenv/cli.py
@@ -0,0 +1,199 @@
+import json
+import os
+import shlex
+import sys
+from contextlib import contextmanager
+from subprocess import Popen
+from typing import Any, Dict, IO, Iterator, List
+
+try:
+ import click
+except ImportError:
+ sys.stderr.write('It seems python-dotenv is not installed with cli option. \n'
+ 'Run pip install "python-dotenv[cli]" to fix this.')
+ sys.exit(1)
+
+from .main import dotenv_values, set_key, unset_key
+from .version import __version__
+
+
+def enumerate_env():
+ """
+ Return a path for the ${pwd}/.env file.
+
+ If pwd does not exist, return None.
+ """
+ try:
+ cwd = os.getcwd()
+ except FileNotFoundError:
+ return None
+ path = os.path.join(cwd, '.env')
+ return path
+
+
+@click.group()
+@click.option('-f', '--file', default=enumerate_env(),
+ type=click.Path(file_okay=True),
+ help="Location of the .env file, defaults to .env file in current working directory.")
+@click.option('-q', '--quote', default='always',
+ type=click.Choice(['always', 'never', 'auto']),
+ help="Whether to quote or not the variable values. Default mode is always. This does not affect parsing.")
+@click.option('-e', '--export', default=False,
+ type=click.BOOL,
+ help="Whether to write the dot file as an executable bash script.")
+@click.version_option(version=__version__)
+@click.pass_context
+def cli(ctx: click.Context, file: Any, quote: Any, export: Any) -> None:
+ """This script is used to set, get or unset values from a .env file."""
+ ctx.obj = {'QUOTE': quote, 'EXPORT': export, 'FILE': file}
+
+
+@contextmanager
+def stream_file(path: os.PathLike) -> Iterator[IO[str]]:
+ """
+ Open a file and yield the corresponding (decoded) stream.
+
+ Exits with error code 2 if the file cannot be opened.
+ """
+
+ try:
+ with open(path) as stream:
+ yield stream
+ except OSError as exc:
+ print(f"Error opening env file: {exc}", file=sys.stderr)
+ exit(2)
+
+
+@cli.command()
+@click.pass_context
+@click.option('--format', default='simple',
+ type=click.Choice(['simple', 'json', 'shell', 'export']),
+ help="The format in which to display the list. Default format is simple, "
+ "which displays name=value without quotes.")
+def list(ctx: click.Context, format: bool) -> None:
+ """Display all the stored key/value."""
+ file = ctx.obj['FILE']
+
+ with stream_file(file) as stream:
+ values = dotenv_values(stream=stream)
+
+ if format == 'json':
+ click.echo(json.dumps(values, indent=2, sort_keys=True))
+ else:
+ prefix = 'export ' if format == 'export' else ''
+ for k in sorted(values):
+ v = values[k]
+ if v is not None:
+ if format in ('export', 'shell'):
+ v = shlex.quote(v)
+ click.echo(f'{prefix}{k}={v}')
+
+
+@cli.command()
+@click.pass_context
+@click.argument('key', required=True)
+@click.argument('value', required=True)
+def set(ctx: click.Context, key: Any, value: Any) -> None:
+ """Store the given key/value."""
+ file = ctx.obj['FILE']
+ quote = ctx.obj['QUOTE']
+ export = ctx.obj['EXPORT']
+ success, key, value = set_key(file, key, value, quote, export)
+ if success:
+ click.echo(f'{key}={value}')
+ else:
+ exit(1)
+
+
+@cli.command()
+@click.pass_context
+@click.argument('key', required=True)
+def get(ctx: click.Context, key: Any) -> None:
+ """Retrieve the value for the given key."""
+ file = ctx.obj['FILE']
+
+ with stream_file(file) as stream:
+ values = dotenv_values(stream=stream)
+
+ stored_value = values.get(key)
+ if stored_value:
+ click.echo(stored_value)
+ else:
+ exit(1)
+
+
+@cli.command()
+@click.pass_context
+@click.argument('key', required=True)
+def unset(ctx: click.Context, key: Any) -> None:
+ """Removes the given key."""
+ file = ctx.obj['FILE']
+ quote = ctx.obj['QUOTE']
+ success, key = unset_key(file, key, quote)
+ if success:
+ click.echo(f"Successfully removed {key}")
+ else:
+ exit(1)
+
+
+@cli.command(context_settings={'ignore_unknown_options': True})
+@click.pass_context
+@click.option(
+ "--override/--no-override",
+ default=True,
+    help="Override variables from the environment with those from the .env file.",
+)
+@click.argument('commandline', nargs=-1, type=click.UNPROCESSED)
+def run(ctx: click.Context, override: bool, commandline: List[str]) -> None:
+ """Run command with environment variables present."""
+ file = ctx.obj['FILE']
+ if not os.path.isfile(file):
+ raise click.BadParameter(
+ f'Invalid value for \'-f\' "{file}" does not exist.',
+ ctx=ctx
+ )
+ dotenv_as_dict = {
+ k: v
+ for (k, v) in dotenv_values(file).items()
+ if v is not None and (override or k not in os.environ)
+ }
+
+ if not commandline:
+ click.echo('No command given.')
+ exit(1)
+ ret = run_command(commandline, dotenv_as_dict)
+ exit(ret)
+
+
+def run_command(command: List[str], env: Dict[str, str]) -> int:
+ """Run command in sub process.
+
+ Runs the command in a sub process with the variables from `env`
+ added in the current environment variables.
+
+ Parameters
+ ----------
+ command: List[str]
+        The command and its parameters
+ env: Dict
+ The additional environment variables
+
+ Returns
+ -------
+ int
+ The return code of the command
+
+ """
+    # copy the current environment variables and add the values from
+ # `env`
+ cmd_env = os.environ.copy()
+ cmd_env.update(env)
+
+ p = Popen(command,
+ universal_newlines=True,
+ bufsize=0,
+ shell=False,
+ env=cmd_env)
+ _, _ = p.communicate()
+
+ return p.returncode
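
The `run` command above amounts to merging the values parsed from a `.env` file into a copy of `os.environ` and starting the child process with that mapping, which is what `run_command()` does via `Popen`. A minimal standalone sketch of the same pattern using the package's `dotenv_values` helper (defined in `dotenv/main.py` further down); the `.env` path and the `printenv` command are illustrative assumptions:

```python
import os
import subprocess

from dotenv import dotenv_values

# Merge values from ./.env (illustrative path) over a copy of the current
# environment, skipping keys that were declared without a value (None).
cmd_env = os.environ.copy()
cmd_env.update({k: v for k, v in dotenv_values(".env").items() if v is not None})

# Run a child process with the merged environment (the command is illustrative).
completed = subprocess.run(["printenv"], env=cmd_env, text=True, capture_output=True)
print(completed.returncode)
```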
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/dotenv/ipython.py b/stripe_config/.venv/lib/python3.12/site-packages/dotenv/ipython.py
new file mode 100644
index 0000000..7df727c
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/dotenv/ipython.py
@@ -0,0 +1,39 @@
+from IPython.core.magic import Magics, line_magic, magics_class # type: ignore
+from IPython.core.magic_arguments import (argument, magic_arguments, # type: ignore
+ parse_argstring) # type: ignore
+
+from .main import find_dotenv, load_dotenv
+
+
+@magics_class
+class IPythonDotEnv(Magics):
+
+ @magic_arguments()
+ @argument(
+ '-o', '--override', action='store_true',
+ help="Indicate to override existing variables"
+ )
+ @argument(
+ '-v', '--verbose', action='store_true',
+ help="Indicate function calls to be verbose"
+ )
+ @argument('dotenv_path', nargs='?', type=str, default='.env',
+ help='Search in increasingly higher folders for the `dotenv_path`')
+ @line_magic
+ def dotenv(self, line):
+ args = parse_argstring(self.dotenv, line)
+ # Locate the .env file
+ dotenv_path = args.dotenv_path
+ try:
+ dotenv_path = find_dotenv(dotenv_path, True, True)
+ except IOError:
+ print("cannot find .env file")
+ return
+
+ # Load the .env file
+ load_dotenv(dotenv_path, verbose=args.verbose, override=args.override)
+
+
+def load_ipython_extension(ipython):
+ """Register the %dotenv magic."""
+ ipython.register_magics(IPythonDotEnv)
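
A hedged sketch of driving the `%dotenv` line magic registered above from code inside an IPython session (assumes IPython is installed; outside IPython, `get_ipython()` returns `None`):

```python
from IPython import get_ipython

ip = get_ipython()  # None when not running under IPython
if ip is not None:
    # Equivalent to typing `%load_ext dotenv` followed by `%dotenv -o -v` at the prompt.
    ip.run_line_magic("load_ext", "dotenv")
    ip.run_line_magic("dotenv", "-o -v")
```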
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/dotenv/main.py b/stripe_config/.venv/lib/python3.12/site-packages/dotenv/main.py
new file mode 100644
index 0000000..7bc5428
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/dotenv/main.py
@@ -0,0 +1,392 @@
+import io
+import logging
+import os
+import pathlib
+import shutil
+import sys
+import tempfile
+from collections import OrderedDict
+from contextlib import contextmanager
+from typing import (IO, Dict, Iterable, Iterator, Mapping, Optional, Tuple,
+ Union)
+
+from .parser import Binding, parse_stream
+from .variables import parse_variables
+
+# A type alias for a string path to be used for the paths in this file.
+# These paths may flow to `open()` and `shutil.move()`; `shutil.move()`
+# only accepts string paths, not byte paths or file descriptors. See
+# https://github.com/python/typeshed/pull/6832.
+StrPath = Union[str, 'os.PathLike[str]']
+
+logger = logging.getLogger(__name__)
+
+
+def with_warn_for_invalid_lines(mappings: Iterator[Binding]) -> Iterator[Binding]:
+ for mapping in mappings:
+ if mapping.error:
+ logger.warning(
+ "Python-dotenv could not parse statement starting at line %s",
+ mapping.original.line,
+ )
+ yield mapping
+
+
+class DotEnv:
+ def __init__(
+ self,
+ dotenv_path: Optional[StrPath],
+ stream: Optional[IO[str]] = None,
+ verbose: bool = False,
+ encoding: Optional[str] = None,
+ interpolate: bool = True,
+ override: bool = True,
+ ) -> None:
+ self.dotenv_path: Optional[StrPath] = dotenv_path
+ self.stream: Optional[IO[str]] = stream
+ self._dict: Optional[Dict[str, Optional[str]]] = None
+ self.verbose: bool = verbose
+ self.encoding: Optional[str] = encoding
+ self.interpolate: bool = interpolate
+ self.override: bool = override
+
+ @contextmanager
+ def _get_stream(self) -> Iterator[IO[str]]:
+ if self.dotenv_path and os.path.isfile(self.dotenv_path):
+ with open(self.dotenv_path, encoding=self.encoding) as stream:
+ yield stream
+ elif self.stream is not None:
+ yield self.stream
+ else:
+ if self.verbose:
+ logger.info(
+ "Python-dotenv could not find configuration file %s.",
+ self.dotenv_path or '.env',
+ )
+ yield io.StringIO('')
+
+ def dict(self) -> Dict[str, Optional[str]]:
+ """Return dotenv as dict"""
+ if self._dict:
+ return self._dict
+
+ raw_values = self.parse()
+
+ if self.interpolate:
+ self._dict = OrderedDict(resolve_variables(raw_values, override=self.override))
+ else:
+ self._dict = OrderedDict(raw_values)
+
+ return self._dict
+
+ def parse(self) -> Iterator[Tuple[str, Optional[str]]]:
+ with self._get_stream() as stream:
+ for mapping in with_warn_for_invalid_lines(parse_stream(stream)):
+ if mapping.key is not None:
+ yield mapping.key, mapping.value
+
+ def set_as_environment_variables(self) -> bool:
+ """
+        Load the current dotenv as system environment variables.
+ """
+ if not self.dict():
+ return False
+
+ for k, v in self.dict().items():
+ if k in os.environ and not self.override:
+ continue
+ if v is not None:
+ os.environ[k] = v
+
+ return True
+
+ def get(self, key: str) -> Optional[str]:
+        """
+        Return the value for the given key, or None if the key is absent.
+        """
+ data = self.dict()
+
+ if key in data:
+ return data[key]
+
+ if self.verbose:
+ logger.warning("Key %s not found in %s.", key, self.dotenv_path)
+
+ return None
+
+
+def get_key(
+ dotenv_path: StrPath,
+ key_to_get: str,
+ encoding: Optional[str] = "utf-8",
+) -> Optional[str]:
+ """
+ Get the value of a given key from the given .env.
+
+ Returns `None` if the key isn't found or doesn't have a value.
+ """
+ return DotEnv(dotenv_path, verbose=True, encoding=encoding).get(key_to_get)
+
+
+@contextmanager
+def rewrite(
+ path: StrPath,
+ encoding: Optional[str],
+) -> Iterator[Tuple[IO[str], IO[str]]]:
+ pathlib.Path(path).touch()
+
+ with tempfile.NamedTemporaryFile(mode="w", encoding=encoding, delete=False) as dest:
+ error = None
+ try:
+ with open(path, encoding=encoding) as source:
+ yield (source, dest)
+ except BaseException as err:
+ error = err
+
+ if error is None:
+ shutil.move(dest.name, path)
+ else:
+ os.unlink(dest.name)
+ raise error from None
+
+
+def set_key(
+ dotenv_path: StrPath,
+ key_to_set: str,
+ value_to_set: str,
+ quote_mode: str = "always",
+ export: bool = False,
+ encoding: Optional[str] = "utf-8",
+) -> Tuple[Optional[bool], str, str]:
+ """
+ Adds or Updates a key/value to the given .env
+
+ If the .env path given doesn't exist, fails instead of risking creating
+ an orphan .env somewhere in the filesystem
+ """
+ if quote_mode not in ("always", "auto", "never"):
+ raise ValueError(f"Unknown quote_mode: {quote_mode}")
+
+ quote = (
+ quote_mode == "always"
+ or (quote_mode == "auto" and not value_to_set.isalnum())
+ )
+
+ if quote:
+ value_out = "'{}'".format(value_to_set.replace("'", "\\'"))
+ else:
+ value_out = value_to_set
+ if export:
+ line_out = f'export {key_to_set}={value_out}\n'
+ else:
+ line_out = f"{key_to_set}={value_out}\n"
+
+ with rewrite(dotenv_path, encoding=encoding) as (source, dest):
+ replaced = False
+ missing_newline = False
+ for mapping in with_warn_for_invalid_lines(parse_stream(source)):
+ if mapping.key == key_to_set:
+ dest.write(line_out)
+ replaced = True
+ else:
+ dest.write(mapping.original.string)
+ missing_newline = not mapping.original.string.endswith("\n")
+ if not replaced:
+ if missing_newline:
+ dest.write("\n")
+ dest.write(line_out)
+
+ return True, key_to_set, value_to_set
+
+
+def unset_key(
+ dotenv_path: StrPath,
+ key_to_unset: str,
+ quote_mode: str = "always",
+ encoding: Optional[str] = "utf-8",
+) -> Tuple[Optional[bool], str]:
+ """
+ Removes a given key from the given `.env` file.
+
+ If the .env path given doesn't exist, fails.
+ If the given key doesn't exist in the .env, fails.
+ """
+ if not os.path.exists(dotenv_path):
+ logger.warning("Can't delete from %s - it doesn't exist.", dotenv_path)
+ return None, key_to_unset
+
+ removed = False
+ with rewrite(dotenv_path, encoding=encoding) as (source, dest):
+ for mapping in with_warn_for_invalid_lines(parse_stream(source)):
+ if mapping.key == key_to_unset:
+ removed = True
+ else:
+ dest.write(mapping.original.string)
+
+ if not removed:
+ logger.warning("Key %s not removed from %s - key doesn't exist.", key_to_unset, dotenv_path)
+ return None, key_to_unset
+
+ return removed, key_to_unset
+
+
+def resolve_variables(
+ values: Iterable[Tuple[str, Optional[str]]],
+ override: bool,
+) -> Mapping[str, Optional[str]]:
+ new_values: Dict[str, Optional[str]] = {}
+
+ for (name, value) in values:
+ if value is None:
+ result = None
+ else:
+ atoms = parse_variables(value)
+ env: Dict[str, Optional[str]] = {}
+ if override:
+ env.update(os.environ) # type: ignore
+ env.update(new_values)
+ else:
+ env.update(new_values)
+ env.update(os.environ) # type: ignore
+ result = "".join(atom.resolve(env) for atom in atoms)
+
+ new_values[name] = result
+
+ return new_values
+
+
+def _walk_to_root(path: str) -> Iterator[str]:
+ """
+ Yield directories starting from the given directory up to the root
+ """
+ if not os.path.exists(path):
+ raise IOError('Starting path not found')
+
+ if os.path.isfile(path):
+ path = os.path.dirname(path)
+
+ last_dir = None
+ current_dir = os.path.abspath(path)
+ while last_dir != current_dir:
+ yield current_dir
+ parent_dir = os.path.abspath(os.path.join(current_dir, os.path.pardir))
+ last_dir, current_dir = current_dir, parent_dir
+
+
+def find_dotenv(
+ filename: str = '.env',
+ raise_error_if_not_found: bool = False,
+ usecwd: bool = False,
+) -> str:
+ """
+ Search in increasingly higher folders for the given file
+
+ Returns path to the file if found, or an empty string otherwise
+ """
+
+ def _is_interactive():
+ """ Decide whether this is running in a REPL or IPython notebook """
+ try:
+ main = __import__('__main__', None, None, fromlist=['__file__'])
+ except ModuleNotFoundError:
+ return False
+ return not hasattr(main, '__file__')
+
+ if usecwd or _is_interactive() or getattr(sys, 'frozen', False):
+ # Should work without __file__, e.g. in REPL or IPython notebook.
+ path = os.getcwd()
+ else:
+ # will work for .py files
+ frame = sys._getframe()
+ current_file = __file__
+
+ while frame.f_code.co_filename == current_file or not os.path.exists(
+ frame.f_code.co_filename
+ ):
+ assert frame.f_back is not None
+ frame = frame.f_back
+ frame_filename = frame.f_code.co_filename
+ path = os.path.dirname(os.path.abspath(frame_filename))
+
+ for dirname in _walk_to_root(path):
+ check_path = os.path.join(dirname, filename)
+ if os.path.isfile(check_path):
+ return check_path
+
+ if raise_error_if_not_found:
+ raise IOError('File not found')
+
+ return ''
+
+
+def load_dotenv(
+ dotenv_path: Optional[StrPath] = None,
+ stream: Optional[IO[str]] = None,
+ verbose: bool = False,
+ override: bool = False,
+ interpolate: bool = True,
+ encoding: Optional[str] = "utf-8",
+) -> bool:
+ """Parse a .env file and then load all the variables found as environment variables.
+
+ Parameters:
+ dotenv_path: Absolute or relative path to .env file.
+ stream: Text stream (such as `io.StringIO`) with .env content, used if
+ `dotenv_path` is `None`.
+        verbose: Whether to output a warning if the .env file is missing.
+ override: Whether to override the system environment variables with the variables
+ from the `.env` file.
+ encoding: Encoding to be used to read the file.
+ Returns:
+ Bool: True if at least one environment variable is set else False
+
+ If both `dotenv_path` and `stream` are `None`, `find_dotenv()` is used to find the
+ .env file.
+ """
+ if dotenv_path is None and stream is None:
+ dotenv_path = find_dotenv()
+
+ dotenv = DotEnv(
+ dotenv_path=dotenv_path,
+ stream=stream,
+ verbose=verbose,
+ interpolate=interpolate,
+ override=override,
+ encoding=encoding,
+ )
+ return dotenv.set_as_environment_variables()
+
+
+def dotenv_values(
+ dotenv_path: Optional[StrPath] = None,
+ stream: Optional[IO[str]] = None,
+ verbose: bool = False,
+ interpolate: bool = True,
+ encoding: Optional[str] = "utf-8",
+) -> Dict[str, Optional[str]]:
+ """
+ Parse a .env file and return its content as a dict.
+
+ The returned dict will have `None` values for keys without values in the .env file.
+ For example, `foo=bar` results in `{"foo": "bar"}` whereas `foo` alone results in
+ `{"foo": None}`
+
+ Parameters:
+ dotenv_path: Absolute or relative path to the .env file.
+ stream: `StringIO` object with .env content, used if `dotenv_path` is `None`.
+ verbose: Whether to output a warning if the .env file is missing.
+ encoding: Encoding to be used to read the file.
+
+ If both `dotenv_path` and `stream` are `None`, `find_dotenv()` is used to find the
+ .env file.
+ """
+ if dotenv_path is None and stream is None:
+ dotenv_path = find_dotenv()
+
+ return DotEnv(
+ dotenv_path=dotenv_path,
+ stream=stream,
+ verbose=verbose,
+ interpolate=interpolate,
+ override=True,
+ encoding=encoding,
+ ).dict()
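
Taken together, `load_dotenv`, `dotenv_values`, `get_key` and `set_key` above form the module's public surface. A short usage sketch against an in-memory stream (the keys shown are illustrative assumptions):

```python
import io

from dotenv import dotenv_values, load_dotenv

# Parse without touching os.environ: `FOO=bar` maps to "bar", a bare `BAZ` maps to None.
values = dotenv_values(stream=io.StringIO("FOO=bar\nBAZ\n"))
print(dict(values))  # {'FOO': 'bar', 'BAZ': None}

# Load a .env file located via find_dotenv() into os.environ without overriding
# variables that are already set; returns True if at least one variable was set.
print(load_dotenv(override=False))
```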
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/dotenv/parser.py b/stripe_config/.venv/lib/python3.12/site-packages/dotenv/parser.py
new file mode 100644
index 0000000..735f14a
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/dotenv/parser.py
@@ -0,0 +1,175 @@
+import codecs
+import re
+from typing import (IO, Iterator, Match, NamedTuple, Optional, # noqa:F401
+ Pattern, Sequence, Tuple)
+
+
+def make_regex(string: str, extra_flags: int = 0) -> Pattern[str]:
+ return re.compile(string, re.UNICODE | extra_flags)
+
+
+_newline = make_regex(r"(\r\n|\n|\r)")
+_multiline_whitespace = make_regex(r"\s*", extra_flags=re.MULTILINE)
+_whitespace = make_regex(r"[^\S\r\n]*")
+_export = make_regex(r"(?:export[^\S\r\n]+)?")
+_single_quoted_key = make_regex(r"'([^']+)'")
+_unquoted_key = make_regex(r"([^=\#\s]+)")
+_equal_sign = make_regex(r"(=[^\S\r\n]*)")
+_single_quoted_value = make_regex(r"'((?:\\'|[^'])*)'")
+_double_quoted_value = make_regex(r'"((?:\\"|[^"])*)"')
+_unquoted_value = make_regex(r"([^\r\n]*)")
+_comment = make_regex(r"(?:[^\S\r\n]*#[^\r\n]*)?")
+_end_of_line = make_regex(r"[^\S\r\n]*(?:\r\n|\n|\r|$)")
+_rest_of_line = make_regex(r"[^\r\n]*(?:\r|\n|\r\n)?")
+_double_quote_escapes = make_regex(r"\\[\\'\"abfnrtv]")
+_single_quote_escapes = make_regex(r"\\[\\']")
+
+
+class Original(NamedTuple):
+ string: str
+ line: int
+
+
+class Binding(NamedTuple):
+ key: Optional[str]
+ value: Optional[str]
+ original: Original
+ error: bool
+
+
+class Position:
+ def __init__(self, chars: int, line: int) -> None:
+ self.chars = chars
+ self.line = line
+
+ @classmethod
+ def start(cls) -> "Position":
+ return cls(chars=0, line=1)
+
+ def set(self, other: "Position") -> None:
+ self.chars = other.chars
+ self.line = other.line
+
+ def advance(self, string: str) -> None:
+ self.chars += len(string)
+ self.line += len(re.findall(_newline, string))
+
+
+class Error(Exception):
+ pass
+
+
+class Reader:
+ def __init__(self, stream: IO[str]) -> None:
+ self.string = stream.read()
+ self.position = Position.start()
+ self.mark = Position.start()
+
+ def has_next(self) -> bool:
+ return self.position.chars < len(self.string)
+
+ def set_mark(self) -> None:
+ self.mark.set(self.position)
+
+ def get_marked(self) -> Original:
+ return Original(
+ string=self.string[self.mark.chars:self.position.chars],
+ line=self.mark.line,
+ )
+
+ def peek(self, count: int) -> str:
+ return self.string[self.position.chars:self.position.chars + count]
+
+ def read(self, count: int) -> str:
+ result = self.string[self.position.chars:self.position.chars + count]
+ if len(result) < count:
+ raise Error("read: End of string")
+ self.position.advance(result)
+ return result
+
+ def read_regex(self, regex: Pattern[str]) -> Sequence[str]:
+ match = regex.match(self.string, self.position.chars)
+ if match is None:
+ raise Error("read_regex: Pattern not found")
+ self.position.advance(self.string[match.start():match.end()])
+ return match.groups()
+
+
+def decode_escapes(regex: Pattern[str], string: str) -> str:
+ def decode_match(match: Match[str]) -> str:
+ return codecs.decode(match.group(0), 'unicode-escape') # type: ignore
+
+ return regex.sub(decode_match, string)
+
+
+def parse_key(reader: Reader) -> Optional[str]:
+ char = reader.peek(1)
+ if char == "#":
+ return None
+ elif char == "'":
+ (key,) = reader.read_regex(_single_quoted_key)
+ else:
+ (key,) = reader.read_regex(_unquoted_key)
+ return key
+
+
+def parse_unquoted_value(reader: Reader) -> str:
+ (part,) = reader.read_regex(_unquoted_value)
+ return re.sub(r"\s+#.*", "", part).rstrip()
+
+
+def parse_value(reader: Reader) -> str:
+ char = reader.peek(1)
+ if char == u"'":
+ (value,) = reader.read_regex(_single_quoted_value)
+ return decode_escapes(_single_quote_escapes, value)
+ elif char == u'"':
+ (value,) = reader.read_regex(_double_quoted_value)
+ return decode_escapes(_double_quote_escapes, value)
+ elif char in (u"", u"\n", u"\r"):
+ return u""
+ else:
+ return parse_unquoted_value(reader)
+
+
+def parse_binding(reader: Reader) -> Binding:
+ reader.set_mark()
+ try:
+ reader.read_regex(_multiline_whitespace)
+ if not reader.has_next():
+ return Binding(
+ key=None,
+ value=None,
+ original=reader.get_marked(),
+ error=False,
+ )
+ reader.read_regex(_export)
+ key = parse_key(reader)
+ reader.read_regex(_whitespace)
+ if reader.peek(1) == "=":
+ reader.read_regex(_equal_sign)
+ value: Optional[str] = parse_value(reader)
+ else:
+ value = None
+ reader.read_regex(_comment)
+ reader.read_regex(_end_of_line)
+ return Binding(
+ key=key,
+ value=value,
+ original=reader.get_marked(),
+ error=False,
+ )
+ except Error:
+ reader.read_regex(_rest_of_line)
+ return Binding(
+ key=None,
+ value=None,
+ original=reader.get_marked(),
+ error=True,
+ )
+
+
+def parse_stream(stream: IO[str]) -> Iterator[Binding]:
+ reader = Reader(stream)
+ while reader.has_next():
+ yield parse_binding(reader)
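
The `Reader`/`Binding` machinery above can be exercised directly; each parsed statement comes back as a `Binding` carrying the original text and an `error` flag for lines the grammar rejects. An illustrative sketch:

```python
import io

from dotenv.parser import parse_stream

text = "export FOO=bar\n# just a comment\nNOT A VALID LINE\n"
for binding in parse_stream(io.StringIO(text)):
    # A comment-only line yields key=None; the malformed line sets error=True.
    print(binding.key, repr(binding.value), binding.error)
```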
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/dotenv/py.typed b/stripe_config/.venv/lib/python3.12/site-packages/dotenv/py.typed
new file mode 100644
index 0000000..7632ecf
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/dotenv/py.typed
@@ -0,0 +1 @@
+# Marker file for PEP 561
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/dotenv/variables.py b/stripe_config/.venv/lib/python3.12/site-packages/dotenv/variables.py
new file mode 100644
index 0000000..667f2f2
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/dotenv/variables.py
@@ -0,0 +1,86 @@
+import re
+from abc import ABCMeta, abstractmethod
+from typing import Iterator, Mapping, Optional, Pattern
+
+_posix_variable: Pattern[str] = re.compile(
+    r"""
+    \$\{
+        (?P<name>[^\}:]*)
+        (?::-
+            (?P<default>[^\}]*)
+        )?
+    \}
+    """,
+    re.VERBOSE,
+)
+
+
+class Atom(metaclass=ABCMeta):
+ def __ne__(self, other: object) -> bool:
+ result = self.__eq__(other)
+ if result is NotImplemented:
+ return NotImplemented
+ return not result
+
+ @abstractmethod
+ def resolve(self, env: Mapping[str, Optional[str]]) -> str: ...
+
+
+class Literal(Atom):
+ def __init__(self, value: str) -> None:
+ self.value = value
+
+ def __repr__(self) -> str:
+ return f"Literal(value={self.value})"
+
+ def __eq__(self, other: object) -> bool:
+ if not isinstance(other, self.__class__):
+ return NotImplemented
+ return self.value == other.value
+
+ def __hash__(self) -> int:
+ return hash((self.__class__, self.value))
+
+ def resolve(self, env: Mapping[str, Optional[str]]) -> str:
+ return self.value
+
+
+class Variable(Atom):
+ def __init__(self, name: str, default: Optional[str]) -> None:
+ self.name = name
+ self.default = default
+
+ def __repr__(self) -> str:
+ return f"Variable(name={self.name}, default={self.default})"
+
+ def __eq__(self, other: object) -> bool:
+ if not isinstance(other, self.__class__):
+ return NotImplemented
+ return (self.name, self.default) == (other.name, other.default)
+
+ def __hash__(self) -> int:
+ return hash((self.__class__, self.name, self.default))
+
+ def resolve(self, env: Mapping[str, Optional[str]]) -> str:
+ default = self.default if self.default is not None else ""
+ result = env.get(self.name, default)
+ return result if result is not None else ""
+
+
+def parse_variables(value: str) -> Iterator[Atom]:
+ cursor = 0
+
+ for match in _posix_variable.finditer(value):
+ (start, end) = match.span()
+ name = match["name"]
+ default = match["default"]
+
+ if start > cursor:
+ yield Literal(value=value[cursor:start])
+
+ yield Variable(name=name, default=default)
+ cursor = end
+
+ length = len(value)
+ if cursor < length:
+ yield Literal(value=value[cursor:length])
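
`parse_variables` above splits a raw value into `Literal` and `Variable` atoms, which `resolve_variables` in `main.py` then resolves against the environment. A small illustrative sketch:

```python
from dotenv.variables import parse_variables

atoms = list(parse_variables("${HOME:-/tmp}/cache"))
print(atoms)  # [Variable(name=HOME, default=/tmp), Literal(value=/cache)]

# Resolution falls back to the default when the name is missing from the mapping.
print("".join(atom.resolve({}) for atom in atoms))                      # /tmp/cache
print("".join(atom.resolve({"HOME": "/home/demo"}) for atom in atoms))  # /home/demo/cache
```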
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/dotenv/version.py b/stripe_config/.venv/lib/python3.12/site-packages/dotenv/version.py
new file mode 100644
index 0000000..5c4105c
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/dotenv/version.py
@@ -0,0 +1 @@
+__version__ = "1.0.1"
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/flask-3.0.3.dist-info/INSTALLER b/stripe_config/.venv/lib/python3.12/site-packages/flask-3.0.3.dist-info/INSTALLER
new file mode 100644
index 0000000..a1b589e
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/flask-3.0.3.dist-info/INSTALLER
@@ -0,0 +1 @@
+pip
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/flask-3.0.3.dist-info/LICENSE.txt b/stripe_config/.venv/lib/python3.12/site-packages/flask-3.0.3.dist-info/LICENSE.txt
new file mode 100644
index 0000000..9d227a0
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/flask-3.0.3.dist-info/LICENSE.txt
@@ -0,0 +1,28 @@
+Copyright 2010 Pallets
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are
+met:
+
+1. Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+
+2. Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in the
+ documentation and/or other materials provided with the distribution.
+
+3. Neither the name of the copyright holder nor the names of its
+ contributors may be used to endorse or promote products derived from
+ this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
+PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED
+TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
+LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
+NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/flask-3.0.3.dist-info/METADATA b/stripe_config/.venv/lib/python3.12/site-packages/flask-3.0.3.dist-info/METADATA
new file mode 100644
index 0000000..5a02107
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/flask-3.0.3.dist-info/METADATA
@@ -0,0 +1,101 @@
+Metadata-Version: 2.1
+Name: Flask
+Version: 3.0.3
+Summary: A simple framework for building complex web applications.
+Maintainer-email: Pallets <contact@palletsprojects.com>
+Requires-Python: >=3.8
+Description-Content-Type: text/markdown
+Classifier: Development Status :: 5 - Production/Stable
+Classifier: Environment :: Web Environment
+Classifier: Framework :: Flask
+Classifier: Intended Audience :: Developers
+Classifier: License :: OSI Approved :: BSD License
+Classifier: Operating System :: OS Independent
+Classifier: Programming Language :: Python
+Classifier: Topic :: Internet :: WWW/HTTP :: Dynamic Content
+Classifier: Topic :: Internet :: WWW/HTTP :: WSGI
+Classifier: Topic :: Internet :: WWW/HTTP :: WSGI :: Application
+Classifier: Topic :: Software Development :: Libraries :: Application Frameworks
+Classifier: Typing :: Typed
+Requires-Dist: Werkzeug>=3.0.0
+Requires-Dist: Jinja2>=3.1.2
+Requires-Dist: itsdangerous>=2.1.2
+Requires-Dist: click>=8.1.3
+Requires-Dist: blinker>=1.6.2
+Requires-Dist: importlib-metadata>=3.6.0; python_version < '3.10'
+Requires-Dist: asgiref>=3.2 ; extra == "async"
+Requires-Dist: python-dotenv ; extra == "dotenv"
+Project-URL: Changes, https://flask.palletsprojects.com/changes/
+Project-URL: Chat, https://discord.gg/pallets
+Project-URL: Documentation, https://flask.palletsprojects.com/
+Project-URL: Donate, https://palletsprojects.com/donate
+Project-URL: Source, https://github.com/pallets/flask/
+Provides-Extra: async
+Provides-Extra: dotenv
+
+# Flask
+
+Flask is a lightweight [WSGI][] web application framework. It is designed
+to make getting started quick and easy, with the ability to scale up to
+complex applications. It began as a simple wrapper around [Werkzeug][]
+and [Jinja][], and has become one of the most popular Python web
+application frameworks.
+
+Flask offers suggestions, but doesn't enforce any dependencies or
+project layout. It is up to the developer to choose the tools and
+libraries they want to use. There are many extensions provided by the
+community that make adding new functionality easy.
+
+[WSGI]: https://wsgi.readthedocs.io/
+[Werkzeug]: https://werkzeug.palletsprojects.com/
+[Jinja]: https://jinja.palletsprojects.com/
+
+
+## Installing
+
+Install and update from [PyPI][] using an installer such as [pip][]:
+
+```
+$ pip install -U Flask
+```
+
+[PyPI]: https://pypi.org/project/Flask/
+[pip]: https://pip.pypa.io/en/stable/getting-started/
+
+
+## A Simple Example
+
+```python
+# save this as app.py
+from flask import Flask
+
+app = Flask(__name__)
+
+@app.route("/")
+def hello():
+ return "Hello, World!"
+```
+
+```
+$ flask run
+ * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
+```
+
+
+## Contributing
+
+For guidance on setting up a development environment and how to make a
+contribution to Flask, see the [contributing guidelines][].
+
+[contributing guidelines]: https://github.com/pallets/flask/blob/main/CONTRIBUTING.rst
+
+
+## Donate
+
+The Pallets organization develops and supports Flask and the libraries
+it uses. In order to grow the community of contributors and users, and
+allow the maintainers to devote more time to the projects, [please
+donate today][].
+
+[please donate today]: https://palletsprojects.com/donate
+
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/flask-3.0.3.dist-info/RECORD b/stripe_config/.venv/lib/python3.12/site-packages/flask-3.0.3.dist-info/RECORD
new file mode 100644
index 0000000..e63f0ec
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/flask-3.0.3.dist-info/RECORD
@@ -0,0 +1,58 @@
+../../../bin/flask,sha256=mHXq8KoRL782g9oOoISQjtC-2zzIXRB86dkc9jIYTaY,227
+flask-3.0.3.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4
+flask-3.0.3.dist-info/LICENSE.txt,sha256=SJqOEQhQntmKN7uYPhHg9-HTHwvY-Zp5yESOf_N9B-o,1475
+flask-3.0.3.dist-info/METADATA,sha256=exPahy4aahjV-mYqd9qb5HNP8haB_IxTuaotoSvCtag,3177
+flask-3.0.3.dist-info/RECORD,,
+flask-3.0.3.dist-info/REQUESTED,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
+flask-3.0.3.dist-info/WHEEL,sha256=EZbGkh7Ie4PoZfRQ8I0ZuP9VklN_TvcZ6DSE5Uar4z4,81
+flask-3.0.3.dist-info/entry_points.txt,sha256=bBP7hTOS5fz9zLtC7sPofBZAlMkEvBxu7KqS6l5lvc4,40
+flask/__init__.py,sha256=6xMqdVA0FIQ2U1KVaGX3lzNCdXPzoHPaa0hvQCNcfSk,2625
+flask/__main__.py,sha256=bYt9eEaoRQWdejEHFD8REx9jxVEdZptECFsV7F49Ink,30
+flask/__pycache__/__init__.cpython-312.pyc,,
+flask/__pycache__/__main__.cpython-312.pyc,,
+flask/__pycache__/app.cpython-312.pyc,,
+flask/__pycache__/blueprints.cpython-312.pyc,,
+flask/__pycache__/cli.cpython-312.pyc,,
+flask/__pycache__/config.cpython-312.pyc,,
+flask/__pycache__/ctx.cpython-312.pyc,,
+flask/__pycache__/debughelpers.cpython-312.pyc,,
+flask/__pycache__/globals.cpython-312.pyc,,
+flask/__pycache__/helpers.cpython-312.pyc,,
+flask/__pycache__/logging.cpython-312.pyc,,
+flask/__pycache__/sessions.cpython-312.pyc,,
+flask/__pycache__/signals.cpython-312.pyc,,
+flask/__pycache__/templating.cpython-312.pyc,,
+flask/__pycache__/testing.cpython-312.pyc,,
+flask/__pycache__/typing.cpython-312.pyc,,
+flask/__pycache__/views.cpython-312.pyc,,
+flask/__pycache__/wrappers.cpython-312.pyc,,
+flask/app.py,sha256=7-lh6cIj27riTE1Q18Ok1p5nOZ8qYiMux4Btc6o6mNc,60143
+flask/blueprints.py,sha256=7INXPwTkUxfOQXOOv1yu52NpHPmPGI5fMTMFZ-BG9yY,4430
+flask/cli.py,sha256=OOaf_Efqih1i2in58j-5ZZZmQnPpaSfiUFbEjlL9bzw,35825
+flask/config.py,sha256=bLzLVAj-cq-Xotu9erqOFte0xSFaVXyfz0AkP4GbwmY,13312
+flask/ctx.py,sha256=4atDhJJ_cpV1VMq4qsfU4E_61M1oN93jlS2H9gjrl58,15120
+flask/debughelpers.py,sha256=PGIDhStW_efRjpaa3zHIpo-htStJOR41Ip3OJWPYBwo,6080
+flask/globals.py,sha256=XdQZmStBmPIs8t93tjx6pO7Bm3gobAaONWkFcUHaGas,1713
+flask/helpers.py,sha256=tYrcQ_73GuSZVEgwFr-eMmV69UriFQDBmt8wZJIAqvg,23084
+flask/json/__init__.py,sha256=hLNR898paqoefdeAhraa5wyJy-bmRB2k2dV4EgVy2Z8,5602
+flask/json/__pycache__/__init__.cpython-312.pyc,,
+flask/json/__pycache__/provider.cpython-312.pyc,,
+flask/json/__pycache__/tag.cpython-312.pyc,,
+flask/json/provider.py,sha256=q6iB83lSiopy80DZPrU-9mGcWwrD0mvLjiv9fHrRZgc,7646
+flask/json/tag.py,sha256=DhaNwuIOhdt2R74oOC9Y4Z8ZprxFYiRb5dUP5byyINw,9281
+flask/logging.py,sha256=8sM3WMTubi1cBb2c_lPkWpN0J8dMAqrgKRYLLi1dCVI,2377
+flask/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
+flask/sansio/README.md,sha256=-0X1tECnilmz1cogx-YhNw5d7guK7GKrq_DEV2OzlU0,228
+flask/sansio/__pycache__/app.cpython-312.pyc,,
+flask/sansio/__pycache__/blueprints.cpython-312.pyc,,
+flask/sansio/__pycache__/scaffold.cpython-312.pyc,,
+flask/sansio/app.py,sha256=YG5Gf7JVf1c0yccWDZ86q5VSfJUidOVp27HFxFNxC7U,38053
+flask/sansio/blueprints.py,sha256=Tqe-7EkZ-tbWchm8iDoCfD848f0_3nLv6NNjeIPvHwM,24637
+flask/sansio/scaffold.py,sha256=WLV9TRQMMhGlXz-1OKtQ3lv6mtIBQZxdW2HezYrGxoI,30633
+flask/sessions.py,sha256=RU4lzm9MQW9CtH8rVLRTDm8USMJyT4LbvYe7sxM2__k,14807
+flask/signals.py,sha256=V7lMUww7CqgJ2ThUBn1PiatZtQanOyt7OZpu2GZI-34,750
+flask/templating.py,sha256=2TcXLT85Asflm2W9WOSFxKCmYn5e49w_Jkg9-NaaJWo,7537
+flask/testing.py,sha256=3BFXb3bP7R5r-XLBuobhczbxDu8-1LWRzYuhbr-lwaE,10163
+flask/typing.py,sha256=ZavK-wV28Yv8CQB7u73qZp_jLalpbWdrXS37QR1ftN0,3190
+flask/views.py,sha256=B66bTvYBBcHMYk4dA1ScZD0oTRTBl0I5smp1lRm9riI,6939
+flask/wrappers.py,sha256=m1j5tIJxIu8_sPPgTAB_G4TTh52Q-HoDuw_qHV5J59g,5831
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/flask-3.0.3.dist-info/REQUESTED b/stripe_config/.venv/lib/python3.12/site-packages/flask-3.0.3.dist-info/REQUESTED
new file mode 100644
index 0000000..e69de29
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/flask-3.0.3.dist-info/WHEEL b/stripe_config/.venv/lib/python3.12/site-packages/flask-3.0.3.dist-info/WHEEL
new file mode 100644
index 0000000..3b5e64b
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/flask-3.0.3.dist-info/WHEEL
@@ -0,0 +1,4 @@
+Wheel-Version: 1.0
+Generator: flit 3.9.0
+Root-Is-Purelib: true
+Tag: py3-none-any
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/flask-3.0.3.dist-info/entry_points.txt b/stripe_config/.venv/lib/python3.12/site-packages/flask-3.0.3.dist-info/entry_points.txt
new file mode 100644
index 0000000..eec6733
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/flask-3.0.3.dist-info/entry_points.txt
@@ -0,0 +1,3 @@
+[console_scripts]
+flask=flask.cli:main
+
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/flask/__init__.py b/stripe_config/.venv/lib/python3.12/site-packages/flask/__init__.py
new file mode 100644
index 0000000..e86eb43
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/flask/__init__.py
@@ -0,0 +1,60 @@
+from __future__ import annotations
+
+import typing as t
+
+from . import json as json
+from .app import Flask as Flask
+from .blueprints import Blueprint as Blueprint
+from .config import Config as Config
+from .ctx import after_this_request as after_this_request
+from .ctx import copy_current_request_context as copy_current_request_context
+from .ctx import has_app_context as has_app_context
+from .ctx import has_request_context as has_request_context
+from .globals import current_app as current_app
+from .globals import g as g
+from .globals import request as request
+from .globals import session as session
+from .helpers import abort as abort
+from .helpers import flash as flash
+from .helpers import get_flashed_messages as get_flashed_messages
+from .helpers import get_template_attribute as get_template_attribute
+from .helpers import make_response as make_response
+from .helpers import redirect as redirect
+from .helpers import send_file as send_file
+from .helpers import send_from_directory as send_from_directory
+from .helpers import stream_with_context as stream_with_context
+from .helpers import url_for as url_for
+from .json import jsonify as jsonify
+from .signals import appcontext_popped as appcontext_popped
+from .signals import appcontext_pushed as appcontext_pushed
+from .signals import appcontext_tearing_down as appcontext_tearing_down
+from .signals import before_render_template as before_render_template
+from .signals import got_request_exception as got_request_exception
+from .signals import message_flashed as message_flashed
+from .signals import request_finished as request_finished
+from .signals import request_started as request_started
+from .signals import request_tearing_down as request_tearing_down
+from .signals import template_rendered as template_rendered
+from .templating import render_template as render_template
+from .templating import render_template_string as render_template_string
+from .templating import stream_template as stream_template
+from .templating import stream_template_string as stream_template_string
+from .wrappers import Request as Request
+from .wrappers import Response as Response
+
+
+def __getattr__(name: str) -> t.Any:
+ if name == "__version__":
+ import importlib.metadata
+ import warnings
+
+ warnings.warn(
+ "The '__version__' attribute is deprecated and will be removed in"
+ " Flask 3.1. Use feature detection or"
+ " 'importlib.metadata.version(\"flask\")' instead.",
+ DeprecationWarning,
+ stacklevel=2,
+ )
+ return importlib.metadata.version("flask")
+
+ raise AttributeError(name)
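
As the deprecation shim above suggests, the replacement for `flask.__version__` is to read the installed distribution's metadata directly. A minimal sketch, assuming Flask is installed in the active environment:

```python
import importlib.metadata

print(importlib.metadata.version("flask"))  # e.g. "3.0.3" for the wheel vendored here
```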
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/flask/__main__.py b/stripe_config/.venv/lib/python3.12/site-packages/flask/__main__.py
new file mode 100644
index 0000000..4e28416
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/flask/__main__.py
@@ -0,0 +1,3 @@
+from .cli import main
+
+main()
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/flask/__pycache__/__init__.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/flask/__pycache__/__init__.cpython-312.pyc
new file mode 100644
index 0000000..0c86d2a
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/flask/__pycache__/__init__.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/flask/__pycache__/__main__.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/flask/__pycache__/__main__.cpython-312.pyc
new file mode 100644
index 0000000..8b60336
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/flask/__pycache__/__main__.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/flask/__pycache__/app.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/flask/__pycache__/app.cpython-312.pyc
new file mode 100644
index 0000000..79a3850
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/flask/__pycache__/app.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/flask/__pycache__/blueprints.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/flask/__pycache__/blueprints.cpython-312.pyc
new file mode 100644
index 0000000..de07bbc
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/flask/__pycache__/blueprints.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/flask/__pycache__/cli.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/flask/__pycache__/cli.cpython-312.pyc
new file mode 100644
index 0000000..c8e7217
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/flask/__pycache__/cli.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/flask/__pycache__/config.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/flask/__pycache__/config.cpython-312.pyc
new file mode 100644
index 0000000..bd723f5
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/flask/__pycache__/config.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/flask/__pycache__/ctx.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/flask/__pycache__/ctx.cpython-312.pyc
new file mode 100644
index 0000000..ff38cc7
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/flask/__pycache__/ctx.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/flask/__pycache__/debughelpers.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/flask/__pycache__/debughelpers.cpython-312.pyc
new file mode 100644
index 0000000..416e672
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/flask/__pycache__/debughelpers.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/flask/__pycache__/globals.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/flask/__pycache__/globals.cpython-312.pyc
new file mode 100644
index 0000000..c9ee6bf
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/flask/__pycache__/globals.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/flask/__pycache__/helpers.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/flask/__pycache__/helpers.cpython-312.pyc
new file mode 100644
index 0000000..d1df426
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/flask/__pycache__/helpers.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/flask/__pycache__/logging.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/flask/__pycache__/logging.cpython-312.pyc
new file mode 100644
index 0000000..e3b58a4
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/flask/__pycache__/logging.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/flask/__pycache__/sessions.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/flask/__pycache__/sessions.cpython-312.pyc
new file mode 100644
index 0000000..dff1c76
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/flask/__pycache__/sessions.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/flask/__pycache__/signals.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/flask/__pycache__/signals.cpython-312.pyc
new file mode 100644
index 0000000..9d4bca6
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/flask/__pycache__/signals.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/flask/__pycache__/templating.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/flask/__pycache__/templating.cpython-312.pyc
new file mode 100644
index 0000000..4927f40
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/flask/__pycache__/templating.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/flask/__pycache__/testing.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/flask/__pycache__/testing.cpython-312.pyc
new file mode 100644
index 0000000..5012fd9
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/flask/__pycache__/testing.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/flask/__pycache__/typing.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/flask/__pycache__/typing.cpython-312.pyc
new file mode 100644
index 0000000..4469c76
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/flask/__pycache__/typing.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/flask/__pycache__/views.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/flask/__pycache__/views.cpython-312.pyc
new file mode 100644
index 0000000..98253ca
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/flask/__pycache__/views.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/flask/__pycache__/wrappers.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/flask/__pycache__/wrappers.cpython-312.pyc
new file mode 100644
index 0000000..12d25b3
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/flask/__pycache__/wrappers.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/flask/app.py b/stripe_config/.venv/lib/python3.12/site-packages/flask/app.py
new file mode 100644
index 0000000..7622b5e
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/flask/app.py
@@ -0,0 +1,1498 @@
+from __future__ import annotations
+
+import collections.abc as cabc
+import os
+import sys
+import typing as t
+import weakref
+from datetime import timedelta
+from inspect import iscoroutinefunction
+from itertools import chain
+from types import TracebackType
+from urllib.parse import quote as _url_quote
+
+import click
+from werkzeug.datastructures import Headers
+from werkzeug.datastructures import ImmutableDict
+from werkzeug.exceptions import BadRequestKeyError
+from werkzeug.exceptions import HTTPException
+from werkzeug.exceptions import InternalServerError
+from werkzeug.routing import BuildError
+from werkzeug.routing import MapAdapter
+from werkzeug.routing import RequestRedirect
+from werkzeug.routing import RoutingException
+from werkzeug.routing import Rule
+from werkzeug.serving import is_running_from_reloader
+from werkzeug.wrappers import Response as BaseResponse
+
+from . import cli
+from . import typing as ft
+from .ctx import AppContext
+from .ctx import RequestContext
+from .globals import _cv_app
+from .globals import _cv_request
+from .globals import current_app
+from .globals import g
+from .globals import request
+from .globals import request_ctx
+from .globals import session
+from .helpers import get_debug_flag
+from .helpers import get_flashed_messages
+from .helpers import get_load_dotenv
+from .helpers import send_from_directory
+from .sansio.app import App
+from .sansio.scaffold import _sentinel
+from .sessions import SecureCookieSessionInterface
+from .sessions import SessionInterface
+from .signals import appcontext_tearing_down
+from .signals import got_request_exception
+from .signals import request_finished
+from .signals import request_started
+from .signals import request_tearing_down
+from .templating import Environment
+from .wrappers import Request
+from .wrappers import Response
+
+if t.TYPE_CHECKING: # pragma: no cover
+ from _typeshed.wsgi import StartResponse
+ from _typeshed.wsgi import WSGIEnvironment
+
+ from .testing import FlaskClient
+ from .testing import FlaskCliRunner
+
+T_shell_context_processor = t.TypeVar(
+ "T_shell_context_processor", bound=ft.ShellContextProcessorCallable
+)
+T_teardown = t.TypeVar("T_teardown", bound=ft.TeardownCallable)
+T_template_filter = t.TypeVar("T_template_filter", bound=ft.TemplateFilterCallable)
+T_template_global = t.TypeVar("T_template_global", bound=ft.TemplateGlobalCallable)
+T_template_test = t.TypeVar("T_template_test", bound=ft.TemplateTestCallable)
+
+
+def _make_timedelta(value: timedelta | int | None) -> timedelta | None:
+ if value is None or isinstance(value, timedelta):
+ return value
+
+ return timedelta(seconds=value)
+
+
+class Flask(App):
+ """The flask object implements a WSGI application and acts as the central
+ object. It is passed the name of the module or package of the
+ application. Once it is created it will act as a central registry for
+ the view functions, the URL rules, template configuration and much more.
+
+ The name of the package is used to resolve resources from inside the
+ package or the folder the module is contained in depending on if the
+ package parameter resolves to an actual python package (a folder with
+ an :file:`__init__.py` file inside) or a standard module (just a ``.py`` file).
+
+ For more information about resource loading, see :func:`open_resource`.
+
+ Usually you create a :class:`Flask` instance in your main module or
+ in the :file:`__init__.py` file of your package like this::
+
+ from flask import Flask
+ app = Flask(__name__)
+
+ .. admonition:: About the First Parameter
+
+ The idea of the first parameter is to give Flask an idea of what
+ belongs to your application. This name is used to find resources
+ on the filesystem, can be used by extensions to improve debugging
+ information and a lot more.
+
+ So it's important what you provide there. If you are using a single
+ module, `__name__` is always the correct value. If you however are
+ using a package, it's usually recommended to hardcode the name of
+ your package there.
+
+ For example if your application is defined in :file:`yourapplication/app.py`
+ you should create it with one of the two versions below::
+
+ app = Flask('yourapplication')
+ app = Flask(__name__.split('.')[0])
+
+ Why is that? The application will work even with `__name__`, thanks
+ to how resources are looked up. However it will make debugging more
+ painful. Certain extensions can make assumptions based on the
+ import name of your application. For example the Flask-SQLAlchemy
+ extension will look for the code in your application that triggered
+ an SQL query in debug mode. If the import name is not properly set
+ up, that debugging information is lost. (For example it would only
+ pick up SQL queries in `yourapplication.app` and not
+ `yourapplication.views.frontend`)
+
+ .. versionadded:: 0.7
+ The `static_url_path`, `static_folder`, and `template_folder`
+ parameters were added.
+
+ .. versionadded:: 0.8
+ The `instance_path` and `instance_relative_config` parameters were
+ added.
+
+ .. versionadded:: 0.11
+ The `root_path` parameter was added.
+
+ .. versionadded:: 1.0
+ The ``host_matching`` and ``static_host`` parameters were added.
+
+ .. versionadded:: 1.0
+ The ``subdomain_matching`` parameter was added. Subdomain
+ matching needs to be enabled manually now. Setting
+ :data:`SERVER_NAME` does not implicitly enable it.
+
+ :param import_name: the name of the application package
+ :param static_url_path: can be used to specify a different path for the
+ static files on the web. Defaults to the name
+ of the `static_folder` folder.
+ :param static_folder: The folder with static files that is served at
+ ``static_url_path``. Relative to the application ``root_path``
+ or an absolute path. Defaults to ``'static'``.
+ :param static_host: the host to use when adding the static route.
+ Defaults to None. Required when using ``host_matching=True``
+ with a ``static_folder`` configured.
+ :param host_matching: set ``url_map.host_matching`` attribute.
+ Defaults to False.
+ :param subdomain_matching: consider the subdomain relative to
+ :data:`SERVER_NAME` when matching routes. Defaults to False.
+ :param template_folder: the folder that contains the templates that should
+ be used by the application. Defaults to
+ ``'templates'`` folder in the root path of the
+ application.
+ :param instance_path: An alternative instance path for the application.
+ By default the folder ``'instance'`` next to the
+ package or module is assumed to be the instance
+ path.
+ :param instance_relative_config: if set to ``True`` relative filenames
+ for loading the config are assumed to
+ be relative to the instance path instead
+ of the application root.
+ :param root_path: The path to the root of the application files.
+ This should only be set manually when it can't be detected
+ automatically, such as for namespace packages.
+ """
+
+ default_config = ImmutableDict(
+ {
+ "DEBUG": None,
+ "TESTING": False,
+ "PROPAGATE_EXCEPTIONS": None,
+ "SECRET_KEY": None,
+ "PERMANENT_SESSION_LIFETIME": timedelta(days=31),
+ "USE_X_SENDFILE": False,
+ "SERVER_NAME": None,
+ "APPLICATION_ROOT": "/",
+ "SESSION_COOKIE_NAME": "session",
+ "SESSION_COOKIE_DOMAIN": None,
+ "SESSION_COOKIE_PATH": None,
+ "SESSION_COOKIE_HTTPONLY": True,
+ "SESSION_COOKIE_SECURE": False,
+ "SESSION_COOKIE_SAMESITE": None,
+ "SESSION_REFRESH_EACH_REQUEST": True,
+ "MAX_CONTENT_LENGTH": None,
+ "SEND_FILE_MAX_AGE_DEFAULT": None,
+ "TRAP_BAD_REQUEST_ERRORS": None,
+ "TRAP_HTTP_EXCEPTIONS": False,
+ "EXPLAIN_TEMPLATE_LOADING": False,
+ "PREFERRED_URL_SCHEME": "http",
+ "TEMPLATES_AUTO_RELOAD": None,
+ "MAX_COOKIE_SIZE": 4093,
+ }
+ )
+
+ #: The class that is used for request objects. See :class:`~flask.Request`
+ #: for more information.
+ request_class: type[Request] = Request
+
+ #: The class that is used for response objects. See
+ #: :class:`~flask.Response` for more information.
+ response_class: type[Response] = Response
+
+ #: the session interface to use. By default an instance of
+ #: :class:`~flask.sessions.SecureCookieSessionInterface` is used here.
+ #:
+ #: .. versionadded:: 0.8
+ session_interface: SessionInterface = SecureCookieSessionInterface()
+
+ def __init__(
+ self,
+ import_name: str,
+ static_url_path: str | None = None,
+ static_folder: str | os.PathLike[str] | None = "static",
+ static_host: str | None = None,
+ host_matching: bool = False,
+ subdomain_matching: bool = False,
+ template_folder: str | os.PathLike[str] | None = "templates",
+ instance_path: str | None = None,
+ instance_relative_config: bool = False,
+ root_path: str | None = None,
+ ):
+ super().__init__(
+ import_name=import_name,
+ static_url_path=static_url_path,
+ static_folder=static_folder,
+ static_host=static_host,
+ host_matching=host_matching,
+ subdomain_matching=subdomain_matching,
+ template_folder=template_folder,
+ instance_path=instance_path,
+ instance_relative_config=instance_relative_config,
+ root_path=root_path,
+ )
+
+ #: The Click command group for registering CLI commands for this
+ #: object. The commands are available from the ``flask`` command
+ #: once the application has been discovered and blueprints have
+ #: been registered.
+ self.cli = cli.AppGroup()
+
+ # Set the name of the Click group in case someone wants to add
+ # the app's commands to another CLI tool.
+ self.cli.name = self.name
+
+ # Add a static route using the provided static_url_path, static_host,
+ # and static_folder if there is a configured static_folder.
+ # Note we do this without checking if static_folder exists.
+ # For one, it might be created while the server is running (e.g. during
+ # development). Also, Google App Engine stores static files somewhere
+ if self.has_static_folder:
+ assert (
+ bool(static_host) == host_matching
+ ), "Invalid static_host/host_matching combination"
+ # Use a weakref to avoid creating a reference cycle between the app
+ # and the view function (see #3761).
+ self_ref = weakref.ref(self)
+ self.add_url_rule(
+                f"{self.static_url_path}/<path:filename>",
+ endpoint="static",
+ host=static_host,
+ view_func=lambda **kw: self_ref().send_static_file(**kw), # type: ignore # noqa: B950
+ )
+
+ def get_send_file_max_age(self, filename: str | None) -> int | None:
+ """Used by :func:`send_file` to determine the ``max_age`` cache
+ value for a given file path if it wasn't passed.
+
+ By default, this returns :data:`SEND_FILE_MAX_AGE_DEFAULT` from
+ the configuration of :data:`~flask.current_app`. This defaults
+ to ``None``, which tells the browser to use conditional requests
+ instead of a timed cache, which is usually preferable.
+
+ Note this is a duplicate of the same method in the Flask
+ class.
+
+ .. versionchanged:: 2.0
+ The default configuration is ``None`` instead of 12 hours.
+
+ .. versionadded:: 0.9
+ """
+ value = current_app.config["SEND_FILE_MAX_AGE_DEFAULT"]
+
+ if value is None:
+ return None
+
+ if isinstance(value, timedelta):
+ return int(value.total_seconds())
+
+ return value # type: ignore[no-any-return]
+
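+    # Illustrative sketch (not part of Flask): how the config value used above is
+    # typically set by an application. A ``timedelta`` is converted to whole
+    # seconds by ``get_send_file_max_age``; the app name is hypothetical.
+    #
+    #   from datetime import timedelta
+    #   from flask import Flask
+    #
+    #   app = Flask(__name__)
+    #   # Cache static files for one hour instead of using conditional requests.
+    #   app.config["SEND_FILE_MAX_AGE_DEFAULT"] = timedelta(hours=1)
+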
+ def send_static_file(self, filename: str) -> Response:
+ """The view function used to serve files from
+ :attr:`static_folder`. A route is automatically registered for
+ this view at :attr:`static_url_path` if :attr:`static_folder` is
+ set.
+
+ Note this is a duplicate of the same method in the Flask
+ class.
+
+ .. versionadded:: 0.5
+
+ """
+ if not self.has_static_folder:
+ raise RuntimeError("'static_folder' must be set to serve static_files.")
+
+ # send_file only knows to call get_send_file_max_age on the app,
+ # call it here so it works for blueprints too.
+ max_age = self.get_send_file_max_age(filename)
+ return send_from_directory(
+ t.cast(str, self.static_folder), filename, max_age=max_age
+ )
+
+ def open_resource(self, resource: str, mode: str = "rb") -> t.IO[t.AnyStr]:
+ """Open a resource file relative to :attr:`root_path` for
+ reading.
+
+ For example, if the file ``schema.sql`` is next to the file
+ ``app.py`` where the ``Flask`` app is defined, it can be opened
+ with:
+
+ .. code-block:: python
+
+ with app.open_resource("schema.sql") as f:
+ conn.executescript(f.read())
+
+ :param resource: Path to the resource relative to
+ :attr:`root_path`.
+ :param mode: Open the file in this mode. Only reading is
+ supported, valid values are "r" (or "rt") and "rb".
+
+ Note this is a duplicate of the same method in the Flask
+ class.
+
+ """
+ if mode not in {"r", "rt", "rb"}:
+ raise ValueError("Resources can only be opened for reading.")
+
+ return open(os.path.join(self.root_path, resource), mode)
+
+ def open_instance_resource(self, resource: str, mode: str = "rb") -> t.IO[t.AnyStr]:
+ """Opens a resource from the application's instance folder
+ (:attr:`instance_path`). Otherwise works like
+ :meth:`open_resource`. Instance resources can also be opened for
+ writing.
+
+ :param resource: the name of the resource. To access resources within
+ subfolders use forward slashes as separator.
+ :param mode: resource file opening mode, default is 'rb'.
+ """
+ return open(os.path.join(self.instance_path, resource), mode)
+
+ def create_jinja_environment(self) -> Environment:
+ """Create the Jinja environment based on :attr:`jinja_options`
+ and the various Jinja-related methods of the app. Changing
+ :attr:`jinja_options` after this will have no effect. Also adds
+ Flask-related globals and filters to the environment.
+
+ .. versionchanged:: 0.11
+ ``Environment.auto_reload`` set in accordance with
+ ``TEMPLATES_AUTO_RELOAD`` configuration option.
+
+ .. versionadded:: 0.5
+ """
+ options = dict(self.jinja_options)
+
+ if "autoescape" not in options:
+ options["autoescape"] = self.select_jinja_autoescape
+
+ if "auto_reload" not in options:
+ auto_reload = self.config["TEMPLATES_AUTO_RELOAD"]
+
+ if auto_reload is None:
+ auto_reload = self.debug
+
+ options["auto_reload"] = auto_reload
+
+ rv = self.jinja_environment(self, **options)
+ rv.globals.update(
+ url_for=self.url_for,
+ get_flashed_messages=get_flashed_messages,
+ config=self.config,
+ # request, session and g are normally added with the
+ # context processor for efficiency reasons but for imported
+ # templates we also want the proxies in there.
+ request=request,
+ session=session,
+ g=g,
+ )
+ rv.policies["json.dumps_function"] = self.json.dumps
+ return rv
+
+ def create_url_adapter(self, request: Request | None) -> MapAdapter | None:
+ """Creates a URL adapter for the given request. The URL adapter
+ is created at a point where the request context is not yet set
+ up so the request is passed explicitly.
+
+ .. versionadded:: 0.6
+
+ .. versionchanged:: 0.9
+ This can now also be called without a request object when the
+ URL adapter is created for the application context.
+
+ .. versionchanged:: 1.0
+ :data:`SERVER_NAME` no longer implicitly enables subdomain
+ matching. Use :attr:`subdomain_matching` instead.
+ """
+ if request is not None:
+ # If subdomain matching is disabled (the default), use the
+ # default subdomain in all cases. This should be the default
+ # in Werkzeug but it currently does not have that feature.
+ if not self.subdomain_matching:
+ subdomain = self.url_map.default_subdomain or None
+ else:
+ subdomain = None
+
+ return self.url_map.bind_to_environ(
+ request.environ,
+ server_name=self.config["SERVER_NAME"],
+ subdomain=subdomain,
+ )
+ # We need at the very least the server name to be set for this
+ # to work.
+ if self.config["SERVER_NAME"] is not None:
+ return self.url_map.bind(
+ self.config["SERVER_NAME"],
+ script_name=self.config["APPLICATION_ROOT"],
+ url_scheme=self.config["PREFERRED_URL_SCHEME"],
+ )
+
+ return None
+
+ def raise_routing_exception(self, request: Request) -> t.NoReturn:
+ """Intercept routing exceptions and possibly do something else.
+
+ In debug mode, intercept a routing redirect and replace it with
+ an error if the body will be discarded.
+
+ With modern Werkzeug this shouldn't occur, since it now uses a
+ 308 status which tells the browser to resend the method and
+ body.
+
+ .. versionchanged:: 2.1
+ Don't intercept 307 and 308 redirects.
+
+ :meta private:
+ :internal:
+ """
+ if (
+ not self.debug
+ or not isinstance(request.routing_exception, RequestRedirect)
+ or request.routing_exception.code in {307, 308}
+ or request.method in {"GET", "HEAD", "OPTIONS"}
+ ):
+ raise request.routing_exception # type: ignore[misc]
+
+ from .debughelpers import FormDataRoutingRedirect
+
+ raise FormDataRoutingRedirect(request)
+
+ def update_template_context(self, context: dict[str, t.Any]) -> None:
+ """Update the template context with some commonly used variables.
+ This injects request, session, config and g into the template
+ context as well as everything template context processors want
+        to inject. Note that, as of Flask 0.6, the original values
+ in the context will not be overridden if a context processor
+ decides to return a value with the same key.
+
+ :param context: the context as a dictionary that is updated in place
+ to add extra variables.
+ """
+ names: t.Iterable[str | None] = (None,)
+
+ # A template may be rendered outside a request context.
+ if request:
+ names = chain(names, reversed(request.blueprints))
+
+ # The values passed to render_template take precedence. Keep a
+ # copy to re-apply after all context functions.
+ orig_ctx = context.copy()
+
+ for name in names:
+ if name in self.template_context_processors:
+ for func in self.template_context_processors[name]:
+ context.update(self.ensure_sync(func)())
+
+ context.update(orig_ctx)
+
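+    # Illustrative sketch (not part of Flask): a context processor whose values
+    # are merged by the method above (explicit render_template arguments still
+    # take precedence). Names are hypothetical.
+    #
+    #   @app.context_processor
+    #   def inject_site_name():
+    #       return {"site_name": "Example Site"}
+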
+ def make_shell_context(self) -> dict[str, t.Any]:
+ """Returns the shell context for an interactive shell for this
+ application. This runs all the registered shell context
+ processors.
+
+ .. versionadded:: 0.11
+ """
+ rv = {"app": self, "g": g}
+ for processor in self.shell_context_processors:
+ rv.update(processor())
+ return rv
+
+ def run(
+ self,
+ host: str | None = None,
+ port: int | None = None,
+ debug: bool | None = None,
+ load_dotenv: bool = True,
+ **options: t.Any,
+ ) -> None:
+ """Runs the application on a local development server.
+
+ Do not use ``run()`` in a production setting. It is not intended to
+ meet security and performance requirements for a production server.
+ Instead, see :doc:`/deploying/index` for WSGI server recommendations.
+
+ If the :attr:`debug` flag is set the server will automatically reload
+ for code changes and show a debugger in case an exception happened.
+
+ If you want to run the application in debug mode, but disable the
+ code execution on the interactive debugger, you can pass
+ ``use_evalex=False`` as parameter. This will keep the debugger's
+ traceback screen active, but disable code execution.
+
+ It is not recommended to use this function for development with
+ automatic reloading as this is badly supported. Instead you should
+ be using the :command:`flask` command line script's ``run`` support.
+
+ .. admonition:: Keep in Mind
+
+ Flask will suppress any server error with a generic error page
+ unless it is in debug mode. As such to enable just the
+ interactive debugger without the code reloading, you have to
+ invoke :meth:`run` with ``debug=True`` and ``use_reloader=False``.
+ Setting ``use_debugger`` to ``True`` without being in debug mode
+ won't catch any exceptions because there won't be any to
+ catch.
+
+ :param host: the hostname to listen on. Set this to ``'0.0.0.0'`` to
+ have the server available externally as well. Defaults to
+ ``'127.0.0.1'`` or the host in the ``SERVER_NAME`` config variable
+ if present.
+ :param port: the port of the webserver. Defaults to ``5000`` or the
+ port defined in the ``SERVER_NAME`` config variable if present.
+ :param debug: if given, enable or disable debug mode. See
+ :attr:`debug`.
+ :param load_dotenv: Load the nearest :file:`.env` and :file:`.flaskenv`
+ files to set environment variables. Will also change the working
+ directory to the directory containing the first file found.
+ :param options: the options to be forwarded to the underlying Werkzeug
+ server. See :func:`werkzeug.serving.run_simple` for more
+ information.
+
+ .. versionchanged:: 1.0
+ If installed, python-dotenv will be used to load environment
+ variables from :file:`.env` and :file:`.flaskenv` files.
+
+ The :envvar:`FLASK_DEBUG` environment variable will override :attr:`debug`.
+
+ Threaded mode is enabled by default.
+
+ .. versionchanged:: 0.10
+ The default port is now picked from the ``SERVER_NAME``
+ variable.
+ """
+ # Ignore this call so that it doesn't start another server if
+ # the 'flask run' command is used.
+ if os.environ.get("FLASK_RUN_FROM_CLI") == "true":
+ if not is_running_from_reloader():
+ click.secho(
+ " * Ignoring a call to 'app.run()' that would block"
+ " the current 'flask' CLI command.\n"
+ " Only call 'app.run()' in an 'if __name__ =="
+ ' "__main__"\' guard.',
+ fg="red",
+ )
+
+ return
+
+ if get_load_dotenv(load_dotenv):
+ cli.load_dotenv()
+
+ # if set, env var overrides existing value
+ if "FLASK_DEBUG" in os.environ:
+ self.debug = get_debug_flag()
+
+ # debug passed to method overrides all other sources
+ if debug is not None:
+ self.debug = bool(debug)
+
+ server_name = self.config.get("SERVER_NAME")
+ sn_host = sn_port = None
+
+ if server_name:
+ sn_host, _, sn_port = server_name.partition(":")
+
+ if not host:
+ if sn_host:
+ host = sn_host
+ else:
+ host = "127.0.0.1"
+
+ if port or port == 0:
+ port = int(port)
+ elif sn_port:
+ port = int(sn_port)
+ else:
+ port = 5000
+
+ options.setdefault("use_reloader", self.debug)
+ options.setdefault("use_debugger", self.debug)
+ options.setdefault("threaded", True)
+
+ cli.show_server_banner(self.debug, self.name)
+
+ from werkzeug.serving import run_simple
+
+ try:
+ run_simple(t.cast(str, host), port, self, **options)
+ finally:
+ # reset the first request information if the development server
+ # reset normally. This makes it possible to restart the server
+ # without reloader and that stuff from an interactive shell.
+ self._got_first_request = False
+
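+    # Illustrative sketch (not part of Flask): the "__main__" guard mentioned in
+    # the warning above, so 'flask run' and the reloader do not start a second
+    # server. Module and app names are hypothetical.
+    #
+    #   from flask import Flask
+    #
+    #   app = Flask(__name__)
+    #
+    #   if __name__ == "__main__":
+    #       # debugger without the reloader, as described in the docstring
+    #       app.run(debug=True, use_reloader=False)
+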
+ def test_client(self, use_cookies: bool = True, **kwargs: t.Any) -> FlaskClient:
+ """Creates a test client for this application. For information
+ about unit testing head over to :doc:`/testing`.
+
+ Note that if you are testing for assertions or exceptions in your
+ application code, you must set ``app.testing = True`` in order for the
+ exceptions to propagate to the test client. Otherwise, the exception
+ will be handled by the application (not visible to the test client) and
+ the only indication of an AssertionError or other exception will be a
+ 500 status code response to the test client. See the :attr:`testing`
+ attribute. For example::
+
+ app.testing = True
+ client = app.test_client()
+
+ The test client can be used in a ``with`` block to defer the closing down
+ of the context until the end of the ``with`` block. This is useful if
+ you want to access the context locals for testing::
+
+ with app.test_client() as c:
+ rv = c.get('/?vodka=42')
+ assert request.args['vodka'] == '42'
+
+ Additionally, you may pass optional keyword arguments that will then
+ be passed to the application's :attr:`test_client_class` constructor.
+ For example::
+
+ from flask.testing import FlaskClient
+
+ class CustomClient(FlaskClient):
+ def __init__(self, *args, **kwargs):
+ self._authentication = kwargs.pop("authentication")
+ super(CustomClient,self).__init__( *args, **kwargs)
+
+ app.test_client_class = CustomClient
+ client = app.test_client(authentication='Basic ....')
+
+ See :class:`~flask.testing.FlaskClient` for more information.
+
+ .. versionchanged:: 0.4
+ added support for ``with`` block usage for the client.
+
+ .. versionadded:: 0.7
+ The `use_cookies` parameter was added as well as the ability
+ to override the client to be used by setting the
+ :attr:`test_client_class` attribute.
+
+ .. versionchanged:: 0.11
+ Added `**kwargs` to support passing additional keyword arguments to
+ the constructor of :attr:`test_client_class`.
+ """
+ cls = self.test_client_class
+ if cls is None:
+ from .testing import FlaskClient as cls
+ return cls( # type: ignore
+ self, self.response_class, use_cookies=use_cookies, **kwargs
+ )
+
+ def test_cli_runner(self, **kwargs: t.Any) -> FlaskCliRunner:
+ """Create a CLI runner for testing CLI commands.
+ See :ref:`testing-cli`.
+
+ Returns an instance of :attr:`test_cli_runner_class`, by default
+ :class:`~flask.testing.FlaskCliRunner`. The Flask app object is
+ passed as the first argument.
+
+ .. versionadded:: 1.0
+ """
+ cls = self.test_cli_runner_class
+
+ if cls is None:
+ from .testing import FlaskCliRunner as cls
+
+ return cls(self, **kwargs) # type: ignore
+
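+    # Illustrative sketch (not part of Flask): invoking a custom CLI command with
+    # the runner returned above. The 'hello' command is hypothetical.
+    #
+    #   import click
+    #
+    #   @app.cli.command("hello")
+    #   def hello():
+    #       click.echo("hello")
+    #
+    #   runner = app.test_cli_runner()
+    #   result = runner.invoke(args=["hello"])
+    #   assert result.output == "hello\n"
+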
+ def handle_http_exception(
+ self, e: HTTPException
+ ) -> HTTPException | ft.ResponseReturnValue:
+ """Handles an HTTP exception. By default this will invoke the
+ registered error handlers and fall back to returning the
+ exception as response.
+
+ .. versionchanged:: 1.0.3
+ ``RoutingException``, used internally for actions such as
+ slash redirects during routing, is not passed to error
+ handlers.
+
+ .. versionchanged:: 1.0
+ Exceptions are looked up by code *and* by MRO, so
+ ``HTTPException`` subclasses can be handled with a catch-all
+ handler for the base ``HTTPException``.
+
+ .. versionadded:: 0.3
+ """
+ # Proxy exceptions don't have error codes. We want to always return
+ # those unchanged as errors
+ if e.code is None:
+ return e
+
+ # RoutingExceptions are used internally to trigger routing
+ # actions, such as slash redirects raising RequestRedirect. They
+ # are not raised or handled in user code.
+ if isinstance(e, RoutingException):
+ return e
+
+ handler = self._find_error_handler(e, request.blueprints)
+ if handler is None:
+ return e
+ return self.ensure_sync(handler)(e) # type: ignore[no-any-return]
+
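+    # Illustrative sketch (not part of Flask): error handlers that the lookup
+    # above can resolve, either by status code or by exception class (via MRO).
+    # Handler names are hypothetical.
+    #
+    #   from werkzeug.exceptions import HTTPException
+    #
+    #   @app.errorhandler(404)
+    #   def not_found(e):
+    #       return {"error": "not found"}, 404
+    #
+    #   @app.errorhandler(HTTPException)
+    #   def any_http_error(e):
+    #       return {"error": e.description}, e.code
+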
+ def handle_user_exception(
+ self, e: Exception
+ ) -> HTTPException | ft.ResponseReturnValue:
+ """This method is called whenever an exception occurs that
+ should be handled. A special case is :class:`~werkzeug
+ .exceptions.HTTPException` which is forwarded to the
+ :meth:`handle_http_exception` method. This function will either
+ return a response value or reraise the exception with the same
+ traceback.
+
+ .. versionchanged:: 1.0
+ Key errors raised from request data like ``form`` show the
+ bad key in debug mode rather than a generic bad request
+ message.
+
+ .. versionadded:: 0.7
+ """
+ if isinstance(e, BadRequestKeyError) and (
+ self.debug or self.config["TRAP_BAD_REQUEST_ERRORS"]
+ ):
+ e.show_exception = True
+
+ if isinstance(e, HTTPException) and not self.trap_http_exception(e):
+ return self.handle_http_exception(e)
+
+ handler = self._find_error_handler(e, request.blueprints)
+
+ if handler is None:
+ raise
+
+ return self.ensure_sync(handler)(e) # type: ignore[no-any-return]
+
+ def handle_exception(self, e: Exception) -> Response:
+ """Handle an exception that did not have an error handler
+ associated with it, or that was raised from an error handler.
+ This always causes a 500 ``InternalServerError``.
+
+ Always sends the :data:`got_request_exception` signal.
+
+ If :data:`PROPAGATE_EXCEPTIONS` is ``True``, such as in debug
+ mode, the error will be re-raised so that the debugger can
+ display it. Otherwise, the original exception is logged, and
+ an :exc:`~werkzeug.exceptions.InternalServerError` is returned.
+
+ If an error handler is registered for ``InternalServerError`` or
+ ``500``, it will be used. For consistency, the handler will
+ always receive the ``InternalServerError``. The original
+ unhandled exception is available as ``e.original_exception``.
+
+ .. versionchanged:: 1.1.0
+ Always passes the ``InternalServerError`` instance to the
+ handler, setting ``original_exception`` to the unhandled
+ error.
+
+ .. versionchanged:: 1.1.0
+ ``after_request`` functions and other finalization is done
+ even for the default 500 response when there is no handler.
+
+ .. versionadded:: 0.3
+ """
+ exc_info = sys.exc_info()
+ got_request_exception.send(self, _async_wrapper=self.ensure_sync, exception=e)
+ propagate = self.config["PROPAGATE_EXCEPTIONS"]
+
+ if propagate is None:
+ propagate = self.testing or self.debug
+
+ if propagate:
+ # Re-raise if called with an active exception, otherwise
+ # raise the passed in exception.
+ if exc_info[1] is e:
+ raise
+
+ raise e
+
+ self.log_exception(exc_info)
+ server_error: InternalServerError | ft.ResponseReturnValue
+ server_error = InternalServerError(original_exception=e)
+ handler = self._find_error_handler(server_error, request.blueprints)
+
+ if handler is not None:
+ server_error = self.ensure_sync(handler)(server_error)
+
+ return self.finalize_request(server_error, from_error_handler=True)
+
+ def log_exception(
+ self,
+ exc_info: (tuple[type, BaseException, TracebackType] | tuple[None, None, None]),
+ ) -> None:
+ """Logs an exception. This is called by :meth:`handle_exception`
+ if debugging is disabled and right before the handler is called.
+ The default implementation logs the exception as error on the
+ :attr:`logger`.
+
+ .. versionadded:: 0.8
+ """
+ self.logger.error(
+ f"Exception on {request.path} [{request.method}]", exc_info=exc_info
+ )
+
+ def dispatch_request(self) -> ft.ResponseReturnValue:
+ """Does the request dispatching. Matches the URL and returns the
+ return value of the view or error handler. This does not have to
+ be a response object. In order to convert the return value to a
+ proper response object, call :func:`make_response`.
+
+ .. versionchanged:: 0.7
+ This no longer does the exception handling, this code was
+ moved to the new :meth:`full_dispatch_request`.
+ """
+ req = request_ctx.request
+ if req.routing_exception is not None:
+ self.raise_routing_exception(req)
+ rule: Rule = req.url_rule # type: ignore[assignment]
+ # if we provide automatic options for this URL and the
+ # request came with the OPTIONS method, reply automatically
+ if (
+ getattr(rule, "provide_automatic_options", False)
+ and req.method == "OPTIONS"
+ ):
+ return self.make_default_options_response()
+ # otherwise dispatch to the handler for that endpoint
+ view_args: dict[str, t.Any] = req.view_args # type: ignore[assignment]
+ return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args) # type: ignore[no-any-return]
+
+ def full_dispatch_request(self) -> Response:
+ """Dispatches the request and on top of that performs request
+ pre and postprocessing as well as HTTP exception catching and
+ error handling.
+
+ .. versionadded:: 0.7
+ """
+ self._got_first_request = True
+
+ try:
+ request_started.send(self, _async_wrapper=self.ensure_sync)
+ rv = self.preprocess_request()
+ if rv is None:
+ rv = self.dispatch_request()
+ except Exception as e:
+ rv = self.handle_user_exception(e)
+ return self.finalize_request(rv)
+
+ def finalize_request(
+ self,
+ rv: ft.ResponseReturnValue | HTTPException,
+ from_error_handler: bool = False,
+ ) -> Response:
+ """Given the return value from a view function this finalizes
+ the request by converting it into a response and invoking the
+ postprocessing functions. This is invoked for both normal
+ request dispatching as well as error handlers.
+
+ Because this means that it might be called as a result of a
+ failure a special safe mode is available which can be enabled
+ with the `from_error_handler` flag. If enabled, failures in
+ response processing will be logged and otherwise ignored.
+
+ :internal:
+ """
+ response = self.make_response(rv)
+ try:
+ response = self.process_response(response)
+ request_finished.send(
+ self, _async_wrapper=self.ensure_sync, response=response
+ )
+ except Exception:
+ if not from_error_handler:
+ raise
+ self.logger.exception(
+ "Request finalizing failed with an error while handling an error"
+ )
+ return response
+
+ def make_default_options_response(self) -> Response:
+ """This method is called to create the default ``OPTIONS`` response.
+ This can be changed through subclassing to change the default
+ behavior of ``OPTIONS`` responses.
+
+ .. versionadded:: 0.7
+ """
+ adapter = request_ctx.url_adapter
+ methods = adapter.allowed_methods() # type: ignore[union-attr]
+ rv = self.response_class()
+ rv.allow.update(methods)
+ return rv
+
+ def ensure_sync(self, func: t.Callable[..., t.Any]) -> t.Callable[..., t.Any]:
+ """Ensure that the function is synchronous for WSGI workers.
+ Plain ``def`` functions are returned as-is. ``async def``
+ functions are wrapped to run and wait for the response.
+
+ Override this method to change how the app runs async views.
+
+ .. versionadded:: 2.0
+ """
+ if iscoroutinefunction(func):
+ return self.async_to_sync(func)
+
+ return func
+
+ def async_to_sync(
+ self, func: t.Callable[..., t.Coroutine[t.Any, t.Any, t.Any]]
+ ) -> t.Callable[..., t.Any]:
+ """Return a sync function that will run the coroutine function.
+
+ .. code-block:: python
+
+ result = app.async_to_sync(func)(*args, **kwargs)
+
+ Override this method to change how the app converts async code
+ to be synchronously callable.
+
+ .. versionadded:: 2.0
+ """
+ try:
+ from asgiref.sync import async_to_sync as asgiref_async_to_sync
+ except ImportError:
+ raise RuntimeError(
+ "Install Flask with the 'async' extra in order to use async views."
+ ) from None
+
+ return asgiref_async_to_sync(func)
+
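+    # Illustrative sketch (not part of Flask): an ``async def`` view that
+    # ``ensure_sync``/``async_to_sync`` above will wrap. Requires installing
+    # Flask with the 'async' extra (asgiref). Route and names are hypothetical.
+    #
+    #   import asyncio
+    #
+    #   @app.route("/slow")
+    #   async def slow():
+    #       await asyncio.sleep(1)
+    #       return "done"
+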
+ def url_for(
+ self,
+ /,
+ endpoint: str,
+ *,
+ _anchor: str | None = None,
+ _method: str | None = None,
+ _scheme: str | None = None,
+ _external: bool | None = None,
+ **values: t.Any,
+ ) -> str:
+ """Generate a URL to the given endpoint with the given values.
+
+ This is called by :func:`flask.url_for`, and can be called
+ directly as well.
+
+ An *endpoint* is the name of a URL rule, usually added with
+        :meth:`@app.route() <Flask.route>`, and usually the same name as the
+ view function. A route defined in a :class:`~flask.Blueprint`
+ will prepend the blueprint's name separated by a ``.`` to the
+ endpoint.
+
+ In some cases, such as email messages, you want URLs to include
+ the scheme and domain, like ``https://example.com/hello``. When
+ not in an active request, URLs will be external by default, but
+ this requires setting :data:`SERVER_NAME` so Flask knows what
+ domain to use. :data:`APPLICATION_ROOT` and
+ :data:`PREFERRED_URL_SCHEME` should also be configured as
+ needed. This config is only used when not in an active request.
+
+ Functions can be decorated with :meth:`url_defaults` to modify
+ keyword arguments before the URL is built.
+
+ If building fails for some reason, such as an unknown endpoint
+ or incorrect values, the app's :meth:`handle_url_build_error`
+ method is called. If that returns a string, that is returned,
+ otherwise a :exc:`~werkzeug.routing.BuildError` is raised.
+
+ :param endpoint: The endpoint name associated with the URL to
+ generate. If this starts with a ``.``, the current blueprint
+ name (if any) will be used.
+ :param _anchor: If given, append this as ``#anchor`` to the URL.
+ :param _method: If given, generate the URL associated with this
+ method for the endpoint.
+ :param _scheme: If given, the URL will have this scheme if it
+ is external.
+ :param _external: If given, prefer the URL to be internal
+ (False) or require it to be external (True). External URLs
+ include the scheme and domain. When not in an active
+ request, URLs are external by default.
+ :param values: Values to use for the variable parts of the URL
+ rule. Unknown keys are appended as query string arguments,
+ like ``?a=b&c=d``.
+
+ .. versionadded:: 2.2
+ Moved from ``flask.url_for``, which calls this method.
+ """
+ req_ctx = _cv_request.get(None)
+
+ if req_ctx is not None:
+ url_adapter = req_ctx.url_adapter
+ blueprint_name = req_ctx.request.blueprint
+
+ # If the endpoint starts with "." and the request matches a
+ # blueprint, the endpoint is relative to the blueprint.
+ if endpoint[:1] == ".":
+ if blueprint_name is not None:
+ endpoint = f"{blueprint_name}{endpoint}"
+ else:
+ endpoint = endpoint[1:]
+
+ # When in a request, generate a URL without scheme and
+ # domain by default, unless a scheme is given.
+ if _external is None:
+ _external = _scheme is not None
+ else:
+ app_ctx = _cv_app.get(None)
+
+ # If called by helpers.url_for, an app context is active,
+ # use its url_adapter. Otherwise, app.url_for was called
+ # directly, build an adapter.
+ if app_ctx is not None:
+ url_adapter = app_ctx.url_adapter
+ else:
+ url_adapter = self.create_url_adapter(None)
+
+ if url_adapter is None:
+ raise RuntimeError(
+ "Unable to build URLs outside an active request"
+ " without 'SERVER_NAME' configured. Also configure"
+ " 'APPLICATION_ROOT' and 'PREFERRED_URL_SCHEME' as"
+ " needed."
+ )
+
+ # When outside a request, generate a URL with scheme and
+ # domain by default.
+ if _external is None:
+ _external = True
+
+ # It is an error to set _scheme when _external=False, in order
+ # to avoid accidental insecure URLs.
+ if _scheme is not None and not _external:
+ raise ValueError("When specifying '_scheme', '_external' must be True.")
+
+ self.inject_url_defaults(endpoint, values)
+
+ try:
+ rv = url_adapter.build( # type: ignore[union-attr]
+ endpoint,
+ values,
+ method=_method,
+ url_scheme=_scheme,
+ force_external=_external,
+ )
+ except BuildError as error:
+ values.update(
+ _anchor=_anchor, _method=_method, _scheme=_scheme, _external=_external
+ )
+ return self.handle_url_build_error(error, endpoint, values)
+
+ if _anchor is not None:
+ _anchor = _url_quote(_anchor, safe="%!#$&'()*+,/:;=?@")
+ rv = f"{rv}#{_anchor}"
+
+ return rv
+
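+    # Illustrative sketch (not part of Flask): typical calls to the method above.
+    # Endpoint names, values, and the resulting paths are hypothetical.
+    #
+    #   url_for("index")                          # e.g. '/'
+    #   url_for("user.profile", username="ada")   # blueprint endpoint with a value
+    #   url_for("index", _external=True)          # needs SERVER_NAME outside a request
+    #   url_for("static", filename="app.css")     # e.g. '/static/app.css'
+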
+ def make_response(self, rv: ft.ResponseReturnValue) -> Response:
+ """Convert the return value from a view function to an instance of
+ :attr:`response_class`.
+
+ :param rv: the return value from the view function. The view function
+ must return a response. Returning ``None``, or the view ending
+ without returning, is not allowed. The following types are allowed
+ for ``view_rv``:
+
+ ``str``
+ A response object is created with the string encoded to UTF-8
+ as the body.
+
+ ``bytes``
+ A response object is created with the bytes as the body.
+
+ ``dict``
+ A dictionary that will be jsonify'd before being returned.
+
+ ``list``
+ A list that will be jsonify'd before being returned.
+
+ ``generator`` or ``iterator``
+ A generator that returns ``str`` or ``bytes`` to be
+ streamed as the response.
+
+ ``tuple``
+ Either ``(body, status, headers)``, ``(body, status)``, or
+ ``(body, headers)``, where ``body`` is any of the other types
+ allowed here, ``status`` is a string or an integer, and
+ ``headers`` is a dictionary or a list of ``(key, value)``
+ tuples. If ``body`` is a :attr:`response_class` instance,
+                ``status`` overwrites the existing value and ``headers`` are
+ extended.
+
+ :attr:`response_class`
+ The object is returned unchanged.
+
+ other :class:`~werkzeug.wrappers.Response` class
+ The object is coerced to :attr:`response_class`.
+
+ :func:`callable`
+ The function is called as a WSGI application. The result is
+ used to create a response object.
+
+ .. versionchanged:: 2.2
+ A generator will be converted to a streaming response.
+ A list will be converted to a JSON response.
+
+ .. versionchanged:: 1.1
+ A dict will be converted to a JSON response.
+
+ .. versionchanged:: 0.9
+ Previously a tuple was interpreted as the arguments for the
+ response object.
+ """
+
+ status = headers = None
+
+ # unpack tuple returns
+ if isinstance(rv, tuple):
+ len_rv = len(rv)
+
+ # a 3-tuple is unpacked directly
+ if len_rv == 3:
+ rv, status, headers = rv # type: ignore[misc]
+ # decide if a 2-tuple has status or headers
+ elif len_rv == 2:
+ if isinstance(rv[1], (Headers, dict, tuple, list)):
+ rv, headers = rv
+ else:
+ rv, status = rv # type: ignore[assignment,misc]
+ # other sized tuples are not allowed
+ else:
+ raise TypeError(
+ "The view function did not return a valid response tuple."
+ " The tuple must have the form (body, status, headers),"
+ " (body, status), or (body, headers)."
+ )
+
+ # the body must not be None
+ if rv is None:
+ raise TypeError(
+ f"The view function for {request.endpoint!r} did not"
+ " return a valid response. The function either returned"
+ " None or ended without a return statement."
+ )
+
+ # make sure the body is an instance of the response class
+ if not isinstance(rv, self.response_class):
+ if isinstance(rv, (str, bytes, bytearray)) or isinstance(rv, cabc.Iterator):
+ # let the response class set the status and headers instead of
+ # waiting to do it manually, so that the class can handle any
+ # special logic
+ rv = self.response_class(
+ rv,
+ status=status,
+ headers=headers, # type: ignore[arg-type]
+ )
+ status = headers = None
+ elif isinstance(rv, (dict, list)):
+ rv = self.json.response(rv)
+ elif isinstance(rv, BaseResponse) or callable(rv):
+ # evaluate a WSGI callable, or coerce a different response
+ # class to the correct type
+ try:
+ rv = self.response_class.force_type(
+ rv, # type: ignore[arg-type]
+ request.environ,
+ )
+ except TypeError as e:
+ raise TypeError(
+ f"{e}\nThe view function did not return a valid"
+ " response. The return type must be a string,"
+ " dict, list, tuple with headers or status,"
+ " Response instance, or WSGI callable, but it"
+ f" was a {type(rv).__name__}."
+ ).with_traceback(sys.exc_info()[2]) from None
+ else:
+ raise TypeError(
+ "The view function did not return a valid"
+ " response. The return type must be a string,"
+ " dict, list, tuple with headers or status,"
+ " Response instance, or WSGI callable, but it was a"
+ f" {type(rv).__name__}."
+ )
+
+ rv = t.cast(Response, rv)
+ # prefer the status if it was provided
+ if status is not None:
+ if isinstance(status, (str, bytes, bytearray)):
+ rv.status = status
+ else:
+ rv.status_code = status
+
+ # extend existing headers with provided headers
+ if headers:
+ rv.headers.update(headers) # type: ignore[arg-type]
+
+ return rv
+
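+    # Illustrative sketch (not part of Flask): return values that the converter
+    # above accepts. The route and payload are hypothetical.
+    #
+    #   @app.route("/demo")
+    #   def demo():
+    #       # any of the following are valid:
+    #       # return "plain text"                       -> str body
+    #       # return {"ok": True}                       -> JSON response
+    #       # return ["a", "b"]                         -> JSON response
+    #       return {"ok": True}, 201, {"X-Demo": "1"}   # (body, status, headers)
+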
+ def preprocess_request(self) -> ft.ResponseReturnValue | None:
+ """Called before the request is dispatched. Calls
+ :attr:`url_value_preprocessors` registered with the app and the
+ current blueprint (if any). Then calls :attr:`before_request_funcs`
+ registered with the app and the blueprint.
+
+ If any :meth:`before_request` handler returns a non-None value, the
+ value is handled as if it was the return value from the view, and
+ further request handling is stopped.
+ """
+ names = (None, *reversed(request.blueprints))
+
+ for name in names:
+ if name in self.url_value_preprocessors:
+ for url_func in self.url_value_preprocessors[name]:
+ url_func(request.endpoint, request.view_args)
+
+ for name in names:
+ if name in self.before_request_funcs:
+ for before_func in self.before_request_funcs[name]:
+ rv = self.ensure_sync(before_func)()
+
+ if rv is not None:
+ return rv # type: ignore[no-any-return]
+
+ return None
+
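+    # Illustrative sketch (not part of Flask): a before_request hook; returning a
+    # non-None value here short-circuits dispatch, as described above. The header
+    # name and expected key are hypothetical; ``request`` is ``flask.request``.
+    #
+    #   @app.before_request
+    #   def require_api_key():
+    #       if request.headers.get("X-Api-Key") != "expected":
+    #           return {"error": "unauthorized"}, 401
+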
+ def process_response(self, response: Response) -> Response:
+ """Can be overridden in order to modify the response object
+ before it's sent to the WSGI server. By default this will
+ call all the :meth:`after_request` decorated functions.
+
+ .. versionchanged:: 0.5
+ As of Flask 0.5 the functions registered for after request
+ execution are called in reverse order of registration.
+
+ :param response: a :attr:`response_class` object.
+ :return: a new response object or the same, has to be an
+ instance of :attr:`response_class`.
+ """
+ ctx = request_ctx._get_current_object() # type: ignore[attr-defined]
+
+ for func in ctx._after_request_functions:
+ response = self.ensure_sync(func)(response)
+
+ for name in chain(request.blueprints, (None,)):
+ if name in self.after_request_funcs:
+ for func in reversed(self.after_request_funcs[name]):
+ response = self.ensure_sync(func)(response)
+
+ if not self.session_interface.is_null_session(ctx.session):
+ self.session_interface.save_session(self, ctx.session, response)
+
+ return response
+
+ def do_teardown_request(
+ self,
+ exc: BaseException | None = _sentinel, # type: ignore[assignment]
+ ) -> None:
+ """Called after the request is dispatched and the response is
+ returned, right before the request context is popped.
+
+ This calls all functions decorated with
+ :meth:`teardown_request`, and :meth:`Blueprint.teardown_request`
+ if a blueprint handled the request. Finally, the
+ :data:`request_tearing_down` signal is sent.
+
+ This is called by
+        :meth:`RequestContext.pop() <flask.ctx.RequestContext.pop>`,
+ which may be delayed during testing to maintain access to
+ resources.
+
+ :param exc: An unhandled exception raised while dispatching the
+ request. Detected from the current exception information if
+ not passed. Passed to each teardown function.
+
+ .. versionchanged:: 0.9
+ Added the ``exc`` argument.
+ """
+ if exc is _sentinel:
+ exc = sys.exc_info()[1]
+
+ for name in chain(request.blueprints, (None,)):
+ if name in self.teardown_request_funcs:
+ for func in reversed(self.teardown_request_funcs[name]):
+ self.ensure_sync(func)(exc)
+
+ request_tearing_down.send(self, _async_wrapper=self.ensure_sync, exc=exc)
+
+ def do_teardown_appcontext(
+ self,
+ exc: BaseException | None = _sentinel, # type: ignore[assignment]
+ ) -> None:
+ """Called right before the application context is popped.
+
+ When handling a request, the application context is popped
+ after the request context. See :meth:`do_teardown_request`.
+
+ This calls all functions decorated with
+ :meth:`teardown_appcontext`. Then the
+ :data:`appcontext_tearing_down` signal is sent.
+
+ This is called by
+        :meth:`AppContext.pop() <flask.ctx.AppContext.pop>`.
+
+ .. versionadded:: 0.9
+ """
+ if exc is _sentinel:
+ exc = sys.exc_info()[1]
+
+ for func in reversed(self.teardown_appcontext_funcs):
+ self.ensure_sync(func)(exc)
+
+ appcontext_tearing_down.send(self, _async_wrapper=self.ensure_sync, exc=exc)
+
+ def app_context(self) -> AppContext:
+ """Create an :class:`~flask.ctx.AppContext`. Use as a ``with``
+ block to push the context, which will make :data:`current_app`
+ point at this application.
+
+ An application context is automatically pushed by
+        :meth:`RequestContext.push() <flask.ctx.RequestContext.push>`
+ when handling a request, and when running a CLI command. Use
+ this to manually create a context outside of these situations.
+
+ ::
+
+ with app.app_context():
+ init_db()
+
+ See :doc:`/appcontext`.
+
+ .. versionadded:: 0.9
+ """
+ return AppContext(self)
+
+ def request_context(self, environ: WSGIEnvironment) -> RequestContext:
+ """Create a :class:`~flask.ctx.RequestContext` representing a
+ WSGI environment. Use a ``with`` block to push the context,
+ which will make :data:`request` point at this request.
+
+ See :doc:`/reqcontext`.
+
+ Typically you should not call this from your own code. A request
+ context is automatically pushed by the :meth:`wsgi_app` when
+ handling a request. Use :meth:`test_request_context` to create
+ an environment and context instead of this method.
+
+ :param environ: a WSGI environment
+ """
+ return RequestContext(self, environ)
+
+ def test_request_context(self, *args: t.Any, **kwargs: t.Any) -> RequestContext:
+ """Create a :class:`~flask.ctx.RequestContext` for a WSGI
+ environment created from the given values. This is mostly useful
+ during testing, where you may want to run a function that uses
+ request data without dispatching a full request.
+
+ See :doc:`/reqcontext`.
+
+ Use a ``with`` block to push the context, which will make
+ :data:`request` point at the request for the created
+ environment. ::
+
+ with app.test_request_context(...):
+ generate_report()
+
+ When using the shell, it may be easier to push and pop the
+ context manually to avoid indentation. ::
+
+ ctx = app.test_request_context(...)
+ ctx.push()
+ ...
+ ctx.pop()
+
+ Takes the same arguments as Werkzeug's
+ :class:`~werkzeug.test.EnvironBuilder`, with some defaults from
+ the application. See the linked Werkzeug docs for most of the
+ available arguments. Flask-specific behavior is listed here.
+
+ :param path: URL path being requested.
+ :param base_url: Base URL where the app is being served, which
+ ``path`` is relative to. If not given, built from
+ :data:`PREFERRED_URL_SCHEME`, ``subdomain``,
+ :data:`SERVER_NAME`, and :data:`APPLICATION_ROOT`.
+ :param subdomain: Subdomain name to append to
+ :data:`SERVER_NAME`.
+ :param url_scheme: Scheme to use instead of
+ :data:`PREFERRED_URL_SCHEME`.
+ :param data: The request body, either as a string or a dict of
+ form keys and values.
+ :param json: If given, this is serialized as JSON and passed as
+ ``data``. Also defaults ``content_type`` to
+ ``application/json``.
+ :param args: other positional arguments passed to
+ :class:`~werkzeug.test.EnvironBuilder`.
+ :param kwargs: other keyword arguments passed to
+ :class:`~werkzeug.test.EnvironBuilder`.
+ """
+ from .testing import EnvironBuilder
+
+ builder = EnvironBuilder(self, *args, **kwargs)
+
+ try:
+ return self.request_context(builder.get_environ())
+ finally:
+ builder.close()
+
+ def wsgi_app(
+ self, environ: WSGIEnvironment, start_response: StartResponse
+ ) -> cabc.Iterable[bytes]:
+ """The actual WSGI application. This is not implemented in
+ :meth:`__call__` so that middlewares can be applied without
+ losing a reference to the app object. Instead of doing this::
+
+ app = MyMiddleware(app)
+
+ It's a better idea to do this instead::
+
+ app.wsgi_app = MyMiddleware(app.wsgi_app)
+
+ Then you still have the original application object around and
+ can continue to call methods on it.
+
+ .. versionchanged:: 0.7
+ Teardown events for the request and app contexts are called
+ even if an unhandled error occurs. Other events may not be
+ called depending on when an error occurs during dispatch.
+ See :ref:`callbacks-and-errors`.
+
+ :param environ: A WSGI environment.
+ :param start_response: A callable accepting a status code,
+ a list of headers, and an optional exception context to
+ start the response.
+ """
+ ctx = self.request_context(environ)
+ error: BaseException | None = None
+ try:
+ try:
+ ctx.push()
+ response = self.full_dispatch_request()
+ except Exception as e:
+ error = e
+ response = self.handle_exception(e)
+ except: # noqa: B001
+ error = sys.exc_info()[1]
+ raise
+ return response(environ, start_response)
+ finally:
+ if "werkzeug.debug.preserve_context" in environ:
+ environ["werkzeug.debug.preserve_context"](_cv_app.get())
+ environ["werkzeug.debug.preserve_context"](_cv_request.get())
+
+ if error is not None and self.should_ignore_error(error):
+ error = None
+
+ ctx.pop(error)
+
+ def __call__(
+ self, environ: WSGIEnvironment, start_response: StartResponse
+ ) -> cabc.Iterable[bytes]:
+ """The WSGI server calls the Flask application object as the
+ WSGI application. This calls :meth:`wsgi_app`, which can be
+ wrapped to apply middleware.
+ """
+ return self.wsgi_app(environ, start_response)
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/flask/blueprints.py b/stripe_config/.venv/lib/python3.12/site-packages/flask/blueprints.py
new file mode 100644
index 0000000..aa9eacf
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/flask/blueprints.py
@@ -0,0 +1,129 @@
+from __future__ import annotations
+
+import os
+import typing as t
+from datetime import timedelta
+
+from .cli import AppGroup
+from .globals import current_app
+from .helpers import send_from_directory
+from .sansio.blueprints import Blueprint as SansioBlueprint
+from .sansio.blueprints import BlueprintSetupState as BlueprintSetupState # noqa
+from .sansio.scaffold import _sentinel
+
+if t.TYPE_CHECKING: # pragma: no cover
+ from .wrappers import Response
+
+
+class Blueprint(SansioBlueprint):
+ def __init__(
+ self,
+ name: str,
+ import_name: str,
+ static_folder: str | os.PathLike[str] | None = None,
+ static_url_path: str | None = None,
+ template_folder: str | os.PathLike[str] | None = None,
+ url_prefix: str | None = None,
+ subdomain: str | None = None,
+ url_defaults: dict[str, t.Any] | None = None,
+ root_path: str | None = None,
+ cli_group: str | None = _sentinel, # type: ignore
+ ) -> None:
+ super().__init__(
+ name,
+ import_name,
+ static_folder,
+ static_url_path,
+ template_folder,
+ url_prefix,
+ subdomain,
+ url_defaults,
+ root_path,
+ cli_group,
+ )
+
+ #: The Click command group for registering CLI commands for this
+ #: object. The commands are available from the ``flask`` command
+ #: once the application has been discovered and blueprints have
+ #: been registered.
+ self.cli = AppGroup()
+
+ # Set the name of the Click group in case someone wants to add
+ # the app's commands to another CLI tool.
+ self.cli.name = self.name
+
+ def get_send_file_max_age(self, filename: str | None) -> int | None:
+ """Used by :func:`send_file` to determine the ``max_age`` cache
+ value for a given file path if it wasn't passed.
+
+ By default, this returns :data:`SEND_FILE_MAX_AGE_DEFAULT` from
+ the configuration of :data:`~flask.current_app`. This defaults
+ to ``None``, which tells the browser to use conditional requests
+ instead of a timed cache, which is usually preferable.
+
+ Note this is a duplicate of the same method in the Flask
+ class.
+
+ .. versionchanged:: 2.0
+ The default configuration is ``None`` instead of 12 hours.
+
+ .. versionadded:: 0.9
+ """
+ value = current_app.config["SEND_FILE_MAX_AGE_DEFAULT"]
+
+ if value is None:
+ return None
+
+ if isinstance(value, timedelta):
+ return int(value.total_seconds())
+
+ return value # type: ignore[no-any-return]
+
+ def send_static_file(self, filename: str) -> Response:
+ """The view function used to serve files from
+ :attr:`static_folder`. A route is automatically registered for
+ this view at :attr:`static_url_path` if :attr:`static_folder` is
+ set.
+
+ Note this is a duplicate of the same method in the Flask
+ class.
+
+ .. versionadded:: 0.5
+
+ """
+ if not self.has_static_folder:
+ raise RuntimeError("'static_folder' must be set to serve static_files.")
+
+ # send_file only knows to call get_send_file_max_age on the app,
+ # call it here so it works for blueprints too.
+ max_age = self.get_send_file_max_age(filename)
+ return send_from_directory(
+ t.cast(str, self.static_folder), filename, max_age=max_age
+ )
+
+ def open_resource(self, resource: str, mode: str = "rb") -> t.IO[t.AnyStr]:
+ """Open a resource file relative to :attr:`root_path` for
+ reading.
+
+ For example, if the file ``schema.sql`` is next to the file
+ ``app.py`` where the ``Flask`` app is defined, it can be opened
+ with:
+
+ .. code-block:: python
+
+ with app.open_resource("schema.sql") as f:
+ conn.executescript(f.read())
+
+ :param resource: Path to the resource relative to
+ :attr:`root_path`.
+ :param mode: Open the file in this mode. Only reading is
+ supported, valid values are "r" (or "rt") and "rb".
+
+ Note this is a duplicate of the same method in the Flask
+ class.
+
+ """
+ if mode not in {"r", "rt", "rb"}:
+ raise ValueError("Resources can only be opened for reading.")
+
+ return open(os.path.join(self.root_path, resource), mode)
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/flask/cli.py b/stripe_config/.venv/lib/python3.12/site-packages/flask/cli.py
new file mode 100644
index 0000000..ecb292a
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/flask/cli.py
@@ -0,0 +1,1109 @@
+from __future__ import annotations
+
+import ast
+import collections.abc as cabc
+import importlib.metadata
+import inspect
+import os
+import platform
+import re
+import sys
+import traceback
+import typing as t
+from functools import update_wrapper
+from operator import itemgetter
+from types import ModuleType
+
+import click
+from click.core import ParameterSource
+from werkzeug import run_simple
+from werkzeug.serving import is_running_from_reloader
+from werkzeug.utils import import_string
+
+from .globals import current_app
+from .helpers import get_debug_flag
+from .helpers import get_load_dotenv
+
+if t.TYPE_CHECKING:
+ import ssl
+
+ from _typeshed.wsgi import StartResponse
+ from _typeshed.wsgi import WSGIApplication
+ from _typeshed.wsgi import WSGIEnvironment
+
+ from .app import Flask
+
+
+class NoAppException(click.UsageError):
+ """Raised if an application cannot be found or loaded."""
+
+
+def find_best_app(module: ModuleType) -> Flask:
+ """Given a module instance this tries to find the best possible
+ application in the module or raises an exception.
+ """
+ from . import Flask
+
+ # Search for the most common names first.
+ for attr_name in ("app", "application"):
+ app = getattr(module, attr_name, None)
+
+ if isinstance(app, Flask):
+ return app
+
+ # Otherwise find the only object that is a Flask instance.
+ matches = [v for v in module.__dict__.values() if isinstance(v, Flask)]
+
+ if len(matches) == 1:
+ return matches[0]
+ elif len(matches) > 1:
+ raise NoAppException(
+ "Detected multiple Flask applications in module"
+ f" '{module.__name__}'. Use '{module.__name__}:name'"
+ " to specify the correct one."
+ )
+
+ # Search for app factory functions.
+ for attr_name in ("create_app", "make_app"):
+ app_factory = getattr(module, attr_name, None)
+
+ if inspect.isfunction(app_factory):
+ try:
+ app = app_factory()
+
+ if isinstance(app, Flask):
+ return app
+ except TypeError as e:
+ if not _called_with_wrong_args(app_factory):
+ raise
+
+ raise NoAppException(
+ f"Detected factory '{attr_name}' in module '{module.__name__}',"
+ " but could not call it without arguments. Use"
+ f" '{module.__name__}:{attr_name}(args)'"
+ " to specify arguments."
+ ) from e
+
+ raise NoAppException(
+ "Failed to find Flask application or factory in module"
+ f" '{module.__name__}'. Use '{module.__name__}:name'"
+ " to specify one."
+ )
+
+
+def _called_with_wrong_args(f: t.Callable[..., Flask]) -> bool:
+ """Check whether calling a function raised a ``TypeError`` because
+ the call failed or because something in the factory raised the
+ error.
+
+ :param f: The function that was called.
+ :return: ``True`` if the call failed.
+ """
+ tb = sys.exc_info()[2]
+
+ try:
+ while tb is not None:
+ if tb.tb_frame.f_code is f.__code__:
+ # In the function, it was called successfully.
+ return False
+
+ tb = tb.tb_next
+
+ # Didn't reach the function.
+ return True
+ finally:
+ # Delete tb to break a circular reference.
+ # https://docs.python.org/2/library/sys.html#sys.exc_info
+ del tb
+
+
+def find_app_by_string(module: ModuleType, app_name: str) -> Flask:
+ """Check if the given string is a variable name or a function. Call
+ a function to get the app instance, or return the variable directly.
+ """
+ from . import Flask
+
+ # Parse app_name as a single expression to determine if it's a valid
+ # attribute name or function call.
+ try:
+ expr = ast.parse(app_name.strip(), mode="eval").body
+ except SyntaxError:
+ raise NoAppException(
+ f"Failed to parse {app_name!r} as an attribute name or function call."
+ ) from None
+
+ if isinstance(expr, ast.Name):
+ name = expr.id
+ args = []
+ kwargs = {}
+ elif isinstance(expr, ast.Call):
+ # Ensure the function name is an attribute name only.
+ if not isinstance(expr.func, ast.Name):
+ raise NoAppException(
+ f"Function reference must be a simple name: {app_name!r}."
+ )
+
+ name = expr.func.id
+
+ # Parse the positional and keyword arguments as literals.
+ try:
+ args = [ast.literal_eval(arg) for arg in expr.args]
+ kwargs = {
+ kw.arg: ast.literal_eval(kw.value)
+ for kw in expr.keywords
+ if kw.arg is not None
+ }
+ except ValueError:
+ # literal_eval gives cryptic error messages, show a generic
+ # message with the full expression instead.
+ raise NoAppException(
+ f"Failed to parse arguments as literal values: {app_name!r}."
+ ) from None
+ else:
+ raise NoAppException(
+ f"Failed to parse {app_name!r} as an attribute name or function call."
+ )
+
+ try:
+ attr = getattr(module, name)
+ except AttributeError as e:
+ raise NoAppException(
+ f"Failed to find attribute {name!r} in {module.__name__!r}."
+ ) from e
+
+ # If the attribute is a function, call it with any args and kwargs
+ # to get the real application.
+ if inspect.isfunction(attr):
+ try:
+ app = attr(*args, **kwargs)
+ except TypeError as e:
+ if not _called_with_wrong_args(attr):
+ raise
+
+ raise NoAppException(
+ f"The factory {app_name!r} in module"
+ f" {module.__name__!r} could not be called with the"
+ " specified arguments."
+ ) from e
+ else:
+ app = attr
+
+ if isinstance(app, Flask):
+ return app
+
+ raise NoAppException(
+ "A valid Flask application was not obtained from"
+ f" '{module.__name__}:{app_name}'."
+ )
+
+
+def prepare_import(path: str) -> str:
+ """Given a filename this will try to calculate the python path, add it
+ to the search path and return the actual module name that is expected.
+ """
+ path = os.path.realpath(path)
+
+ fname, ext = os.path.splitext(path)
+ if ext == ".py":
+ path = fname
+
+ if os.path.basename(path) == "__init__":
+ path = os.path.dirname(path)
+
+ module_name = []
+
+ # move up until outside package structure (no __init__.py)
+ while True:
+ path, name = os.path.split(path)
+ module_name.append(name)
+
+ if not os.path.exists(os.path.join(path, "__init__.py")):
+ break
+
+ if sys.path[0] != path:
+ sys.path.insert(0, path)
+
+ return ".".join(module_name[::-1])
+
+
+@t.overload
+def locate_app(
+ module_name: str, app_name: str | None, raise_if_not_found: t.Literal[True] = True
+) -> Flask: ...
+
+
+@t.overload
+def locate_app(
+ module_name: str, app_name: str | None, raise_if_not_found: t.Literal[False] = ...
+) -> Flask | None: ...
+
+
+def locate_app(
+ module_name: str, app_name: str | None, raise_if_not_found: bool = True
+) -> Flask | None:
+ try:
+ __import__(module_name)
+ except ImportError:
+ # Reraise the ImportError if it occurred within the imported module.
+ # Determine this by checking whether the trace has a depth > 1.
+ if sys.exc_info()[2].tb_next: # type: ignore[union-attr]
+ raise NoAppException(
+ f"While importing {module_name!r}, an ImportError was"
+ f" raised:\n\n{traceback.format_exc()}"
+ ) from None
+ elif raise_if_not_found:
+ raise NoAppException(f"Could not import {module_name!r}.") from None
+ else:
+ return None
+
+ module = sys.modules[module_name]
+
+ if app_name is None:
+ return find_best_app(module)
+ else:
+ return find_app_by_string(module, app_name)
+
+
+def get_version(ctx: click.Context, param: click.Parameter, value: t.Any) -> None:
+ if not value or ctx.resilient_parsing:
+ return
+
+ flask_version = importlib.metadata.version("flask")
+ werkzeug_version = importlib.metadata.version("werkzeug")
+
+ click.echo(
+ f"Python {platform.python_version()}\n"
+ f"Flask {flask_version}\n"
+ f"Werkzeug {werkzeug_version}",
+ color=ctx.color,
+ )
+ ctx.exit()
+
+
+version_option = click.Option(
+ ["--version"],
+ help="Show the Flask version.",
+ expose_value=False,
+ callback=get_version,
+ is_flag=True,
+ is_eager=True,
+)
+
+
+class ScriptInfo:
+ """Helper object to deal with Flask applications. This is usually not
+ necessary to interface with as it's used internally in the dispatching
+ to click. In future versions of Flask this object will most likely play
+ a bigger role. Typically it's created automatically by the
+ :class:`FlaskGroup` but you can also manually create it and pass it
+ onwards as click object.
+ """
+
+ def __init__(
+ self,
+ app_import_path: str | None = None,
+ create_app: t.Callable[..., Flask] | None = None,
+ set_debug_flag: bool = True,
+ ) -> None:
+ #: Optionally the import path for the Flask application.
+ self.app_import_path = app_import_path
+ #: Optionally a function that is passed the script info to create
+ #: the instance of the application.
+ self.create_app = create_app
+ #: A dictionary with arbitrary data that can be associated with
+ #: this script info.
+ self.data: dict[t.Any, t.Any] = {}
+ self.set_debug_flag = set_debug_flag
+ self._loaded_app: Flask | None = None
+
+ def load_app(self) -> Flask:
+ """Loads the Flask app (if not yet loaded) and returns it. Calling
+        this multiple times will just return the already loaded app.
+ """
+ if self._loaded_app is not None:
+ return self._loaded_app
+
+ if self.create_app is not None:
+ app: Flask | None = self.create_app()
+ else:
+ if self.app_import_path:
+ path, name = (
+ re.split(r":(?![\\/])", self.app_import_path, maxsplit=1) + [None]
+ )[:2]
+ import_name = prepare_import(path)
+ app = locate_app(import_name, name)
+ else:
+ for path in ("wsgi.py", "app.py"):
+ import_name = prepare_import(path)
+ app = locate_app(import_name, None, raise_if_not_found=False)
+
+ if app is not None:
+ break
+
+ if app is None:
+ raise NoAppException(
+ "Could not locate a Flask application. Use the"
+ " 'flask --app' option, 'FLASK_APP' environment"
+ " variable, or a 'wsgi.py' or 'app.py' file in the"
+ " current directory."
+ )
+
+ if self.set_debug_flag:
+ # Update the app's debug flag through the descriptor so that
+ # other values repopulate as well.
+ app.debug = get_debug_flag()
+
+ self._loaded_app = app
+ return app
+
+
+pass_script_info = click.make_pass_decorator(ScriptInfo, ensure=True)
+
+F = t.TypeVar("F", bound=t.Callable[..., t.Any])
+
+
+def with_appcontext(f: F) -> F:
+ """Wraps a callback so that it's guaranteed to be executed with the
+ script's application context.
+
+ Custom commands (and their options) registered under ``app.cli`` or
+    ``blueprint.cli`` will always have an app context available, so this
+    decorator is not required in that case.
+
+ .. versionchanged:: 2.2
+ The app context is active for subcommands as well as the
+ decorated callback. The app context is always available to
+ ``app.cli`` command and parameter callbacks.
+ """
+
+ @click.pass_context
+ def decorator(ctx: click.Context, /, *args: t.Any, **kwargs: t.Any) -> t.Any:
+ if not current_app:
+ app = ctx.ensure_object(ScriptInfo).load_app()
+ ctx.with_resource(app.app_context())
+
+ return ctx.invoke(f, *args, **kwargs)
+
+ return update_wrapper(decorator, f) # type: ignore[return-value]
+
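+# Illustrative sketch (not part of Flask): a command registered on a plain click
+# group, where the decorator above is needed to get an app context. The command
+# name and body are hypothetical.
+#
+#   import click
+#   from flask import current_app
+#   from flask.cli import with_appcontext
+#
+#   @click.command("init-db")
+#   @with_appcontext
+#   def init_db_command():
+#       click.echo(f"Initializing DB for {current_app.name}")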
+
+class AppGroup(click.Group):
+ """This works similar to a regular click :class:`~click.Group` but it
+ changes the behavior of the :meth:`command` decorator so that it
+ automatically wraps the functions in :func:`with_appcontext`.
+
+ Not to be confused with :class:`FlaskGroup`.
+ """
+
+ def command( # type: ignore[override]
+ self, *args: t.Any, **kwargs: t.Any
+ ) -> t.Callable[[t.Callable[..., t.Any]], click.Command]:
+ """This works exactly like the method of the same name on a regular
+ :class:`click.Group` but it wraps callbacks in :func:`with_appcontext`
+ unless it's disabled by passing ``with_appcontext=False``.
+ """
+ wrap_for_ctx = kwargs.pop("with_appcontext", True)
+
+ def decorator(f: t.Callable[..., t.Any]) -> click.Command:
+ if wrap_for_ctx:
+ f = with_appcontext(f)
+ return super(AppGroup, self).command(*args, **kwargs)(f) # type: ignore[no-any-return]
+
+ return decorator
+
+ def group( # type: ignore[override]
+ self, *args: t.Any, **kwargs: t.Any
+ ) -> t.Callable[[t.Callable[..., t.Any]], click.Group]:
+ """This works exactly like the method of the same name on a regular
+ :class:`click.Group` but it defaults the group class to
+ :class:`AppGroup`.
+ """
+ kwargs.setdefault("cls", AppGroup)
+ return super().group(*args, **kwargs) # type: ignore[no-any-return]
+
+
+def _set_app(ctx: click.Context, param: click.Option, value: str | None) -> str | None:
+ if value is None:
+ return None
+
+ info = ctx.ensure_object(ScriptInfo)
+ info.app_import_path = value
+ return value
+
+
+# This option is eager so the app will be available if --help is given.
+# --help is also eager, so --app must be before it in the param list.
+# no_args_is_help bypasses eager processing, so this option must be
+# processed manually in that case to ensure FLASK_APP gets picked up.
+_app_option = click.Option(
+ ["-A", "--app"],
+ metavar="IMPORT",
+ help=(
+ "The Flask application or factory function to load, in the form 'module:name'."
+ " Module can be a dotted import or file path. Name is not required if it is"
+ " 'app', 'application', 'create_app', or 'make_app', and can be 'name(args)' to"
+ " pass arguments."
+ ),
+ is_eager=True,
+ expose_value=False,
+ callback=_set_app,
+)
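+
+# Illustrative --app invocations (module and factory names are hypothetical):
+#
+#     flask --app hello run                        # module "hello", auto-detected app
+#     flask --app hello:app2 run                   # explicit attribute name
+#     flask --app "hello:create_app('dev')" run    # factory call with arguments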
+
+
+def _set_debug(ctx: click.Context, param: click.Option, value: bool) -> bool | None:
+ # If the flag isn't provided, it will default to False. Don't use
+ # that, let debug be set by env in that case.
+ source = ctx.get_parameter_source(param.name) # type: ignore[arg-type]
+
+ if source is not None and source in (
+ ParameterSource.DEFAULT,
+ ParameterSource.DEFAULT_MAP,
+ ):
+ return None
+
+ # Set with env var instead of ScriptInfo.load so that it can be
+ # accessed early during a factory function.
+ os.environ["FLASK_DEBUG"] = "1" if value else "0"
+ return value
+
+
+_debug_option = click.Option(
+ ["--debug/--no-debug"],
+ help="Set debug mode.",
+ expose_value=False,
+ callback=_set_debug,
+)
+
+
+def _env_file_callback(
+ ctx: click.Context, param: click.Option, value: str | None
+) -> str | None:
+ if value is None:
+ return None
+
+ import importlib
+
+ try:
+ importlib.import_module("dotenv")
+ except ImportError:
+ raise click.BadParameter(
+ "python-dotenv must be installed to load an env file.",
+ ctx=ctx,
+ param=param,
+ ) from None
+
+ # Don't check FLASK_SKIP_DOTENV, that only disables automatically
+ # loading .env and .flaskenv files.
+ load_dotenv(value)
+ return value
+
+
+# This option is eager so env vars are loaded as early as possible to be
+# used by other options.
+_env_file_option = click.Option(
+ ["-e", "--env-file"],
+ type=click.Path(exists=True, dir_okay=False),
+ help="Load environment variables from this file. python-dotenv must be installed.",
+ is_eager=True,
+ expose_value=False,
+ callback=_env_file_callback,
+)
+
+
+class FlaskGroup(AppGroup):
+ """Special subclass of the :class:`AppGroup` group that supports
+ loading more commands from the configured Flask app. Normally a
+    developer does not have to interface with this class, but there are
+    some very advanced use cases for which it makes sense to create an
+    instance of this; see :ref:`custom-scripts`.
+
+ :param add_default_commands: if this is True then the default run and
+ shell commands will be added.
+ :param add_version_option: adds the ``--version`` option.
+ :param create_app: an optional callback that is passed the script info and
+ returns the loaded app.
+ :param load_dotenv: Load the nearest :file:`.env` and :file:`.flaskenv`
+ files to set environment variables. Will also change the working
+ directory to the directory containing the first file found.
+ :param set_debug_flag: Set the app's debug flag.
+
+ .. versionchanged:: 2.2
+ Added the ``-A/--app``, ``--debug/--no-debug``, ``-e/--env-file`` options.
+
+ .. versionchanged:: 2.2
+ An app context is pushed when running ``app.cli`` commands, so
+ ``@with_appcontext`` is no longer required for those commands.
+
+ .. versionchanged:: 1.0
+ If installed, python-dotenv will be used to load environment variables
+ from :file:`.env` and :file:`.flaskenv` files.
+ """
+
+ def __init__(
+ self,
+ add_default_commands: bool = True,
+ create_app: t.Callable[..., Flask] | None = None,
+ add_version_option: bool = True,
+ load_dotenv: bool = True,
+ set_debug_flag: bool = True,
+ **extra: t.Any,
+ ) -> None:
+ params = list(extra.pop("params", None) or ())
+ # Processing is done with option callbacks instead of a group
+ # callback. This allows users to make a custom group callback
+ # without losing the behavior. --env-file must come first so
+ # that it is eagerly evaluated before --app.
+ params.extend((_env_file_option, _app_option, _debug_option))
+
+ if add_version_option:
+ params.append(version_option)
+
+ if "context_settings" not in extra:
+ extra["context_settings"] = {}
+
+ extra["context_settings"].setdefault("auto_envvar_prefix", "FLASK")
+
+ super().__init__(params=params, **extra)
+
+ self.create_app = create_app
+ self.load_dotenv = load_dotenv
+ self.set_debug_flag = set_debug_flag
+
+ if add_default_commands:
+ self.add_command(run_command)
+ self.add_command(shell_command)
+ self.add_command(routes_command)
+
+ self._loaded_plugin_commands = False
+
+ def _load_plugin_commands(self) -> None:
+ if self._loaded_plugin_commands:
+ return
+
+ if sys.version_info >= (3, 10):
+ from importlib import metadata
+ else:
+ # Use a backport on Python < 3.10. We technically have
+ # importlib.metadata on 3.8+, but the API changed in 3.10,
+ # so use the backport for consistency.
+ import importlib_metadata as metadata
+
+ for ep in metadata.entry_points(group="flask.commands"):
+ self.add_command(ep.load(), ep.name)
+
+ self._loaded_plugin_commands = True
+
+ def get_command(self, ctx: click.Context, name: str) -> click.Command | None:
+ self._load_plugin_commands()
+ # Look up built-in and plugin commands, which should be
+ # available even if the app fails to load.
+ rv = super().get_command(ctx, name)
+
+ if rv is not None:
+ return rv
+
+ info = ctx.ensure_object(ScriptInfo)
+
+ # Look up commands provided by the app, showing an error and
+ # continuing if the app couldn't be loaded.
+ try:
+ app = info.load_app()
+ except NoAppException as e:
+ click.secho(f"Error: {e.format_message()}\n", err=True, fg="red")
+ return None
+
+ # Push an app context for the loaded app unless it is already
+ # active somehow. This makes the context available to parameter
+ # and command callbacks without needing @with_appcontext.
+ if not current_app or current_app._get_current_object() is not app: # type: ignore[attr-defined]
+ ctx.with_resource(app.app_context())
+
+ return app.cli.get_command(ctx, name)
+
+ def list_commands(self, ctx: click.Context) -> list[str]:
+ self._load_plugin_commands()
+ # Start with the built-in and plugin commands.
+ rv = set(super().list_commands(ctx))
+ info = ctx.ensure_object(ScriptInfo)
+
+ # Add commands provided by the app, showing an error and
+ # continuing if the app couldn't be loaded.
+ try:
+ rv.update(info.load_app().cli.list_commands(ctx))
+ except NoAppException as e:
+ # When an app couldn't be loaded, show the error message
+ # without the traceback.
+ click.secho(f"Error: {e.format_message()}\n", err=True, fg="red")
+ except Exception:
+ # When any other errors occurred during loading, show the
+ # full traceback.
+ click.secho(f"{traceback.format_exc()}\n", err=True, fg="red")
+
+ return sorted(rv)
+
+ def make_context(
+ self,
+ info_name: str | None,
+ args: list[str],
+ parent: click.Context | None = None,
+ **extra: t.Any,
+ ) -> click.Context:
+ # Set a flag to tell app.run to become a no-op. If app.run was
+ # not in a __name__ == __main__ guard, it would start the server
+ # when importing, blocking whatever command is being called.
+ os.environ["FLASK_RUN_FROM_CLI"] = "true"
+
+        # Attempt to load .env and .flaskenv files. The --env-file
+ # option can cause another file to be loaded.
+ if get_load_dotenv(self.load_dotenv):
+ load_dotenv()
+
+ if "obj" not in extra and "obj" not in self.context_settings:
+ extra["obj"] = ScriptInfo(
+ create_app=self.create_app, set_debug_flag=self.set_debug_flag
+ )
+
+ return super().make_context(info_name, args, parent=parent, **extra)
+
+ def parse_args(self, ctx: click.Context, args: list[str]) -> list[str]:
+ if not args and self.no_args_is_help:
+ # Attempt to load --env-file and --app early in case they
+ # were given as env vars. Otherwise no_args_is_help will not
+ # see commands from app.cli.
+ _env_file_option.handle_parse_result(ctx, {}, [])
+ _app_option.handle_parse_result(ctx, {}, [])
+
+ return super().parse_args(ctx, args)
+
+
+def _path_is_ancestor(path: str, other: str) -> bool:
+ """Take ``other`` and remove the length of ``path`` from it. Then join it
+ to ``path``. If it is the original value, ``path`` is an ancestor of
+ ``other``."""
+ return os.path.join(path, other[len(path) :].lstrip(os.sep)) == other
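+
+# For example (illustrative values), _path_is_ancestor("/a", "/a/b") is True,
+# while _path_is_ancestor("/a", "/ab/c") is False.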
+
+
+def load_dotenv(path: str | os.PathLike[str] | None = None) -> bool:
+ """Load "dotenv" files in order of precedence to set environment variables.
+
+ If an env var is already set it is not overwritten, so earlier files in the
+ list are preferred over later files.
+
+ This is a no-op if `python-dotenv`_ is not installed.
+
+ .. _python-dotenv: https://github.com/theskumar/python-dotenv#readme
+
+ :param path: Load the file at this location instead of searching.
+ :return: ``True`` if a file was loaded.
+
+ .. versionchanged:: 2.0
+ The current directory is not changed to the location of the
+ loaded file.
+
+ .. versionchanged:: 2.0
+ When loading the env files, set the default encoding to UTF-8.
+
+ .. versionchanged:: 1.1.0
+ Returns ``False`` when python-dotenv is not installed, or when
+ the given path isn't a file.
+
+ .. versionadded:: 1.0
+ """
+ try:
+ import dotenv
+ except ImportError:
+ if path or os.path.isfile(".env") or os.path.isfile(".flaskenv"):
+ click.secho(
+ " * Tip: There are .env or .flaskenv files present."
+ ' Do "pip install python-dotenv" to use them.',
+ fg="yellow",
+ err=True,
+ )
+
+ return False
+
+ # Always return after attempting to load a given path, don't load
+ # the default files.
+ if path is not None:
+ if os.path.isfile(path):
+ return dotenv.load_dotenv(path, encoding="utf-8")
+
+ return False
+
+ loaded = False
+
+ for name in (".env", ".flaskenv"):
+ path = dotenv.find_dotenv(name, usecwd=True)
+
+ if not path:
+ continue
+
+ dotenv.load_dotenv(path, encoding="utf-8")
+ loaded = True
+
+ return loaded # True if at least one file was located and loaded.
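+
+# Example sketch (assumes python-dotenv is installed and the files exist):
+#
+#     from flask.cli import load_dotenv
+#
+#     load_dotenv()                # search for .env / .flaskenv from the cwd
+#     load_dotenv("deploy/.env")   # or load one specific file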
+
+
+def show_server_banner(debug: bool, app_import_path: str | None) -> None:
+ """Show extra startup messages the first time the server is run,
+ ignoring the reloader.
+ """
+ if is_running_from_reloader():
+ return
+
+ if app_import_path is not None:
+ click.echo(f" * Serving Flask app '{app_import_path}'")
+
+ if debug is not None:
+ click.echo(f" * Debug mode: {'on' if debug else 'off'}")
+
+
+class CertParamType(click.ParamType):
+ """Click option type for the ``--cert`` option. Allows either an
+ existing file, the string ``'adhoc'``, or an import for a
+ :class:`~ssl.SSLContext` object.
+ """
+
+ name = "path"
+
+ def __init__(self) -> None:
+ self.path_type = click.Path(exists=True, dir_okay=False, resolve_path=True)
+
+ def convert(
+ self, value: t.Any, param: click.Parameter | None, ctx: click.Context | None
+ ) -> t.Any:
+ try:
+ import ssl
+ except ImportError:
+ raise click.BadParameter(
+ 'Using "--cert" requires Python to be compiled with SSL support.',
+ ctx,
+ param,
+ ) from None
+
+ try:
+ return self.path_type(value, param, ctx)
+ except click.BadParameter:
+ value = click.STRING(value, param, ctx).lower()
+
+ if value == "adhoc":
+ try:
+ import cryptography # noqa: F401
+ except ImportError:
+ raise click.BadParameter(
+ "Using ad-hoc certificates requires the cryptography library.",
+ ctx,
+ param,
+ ) from None
+
+ return value
+
+ obj = import_string(value, silent=True)
+
+ if isinstance(obj, ssl.SSLContext):
+ return obj
+
+ raise
+
+
+def _validate_key(ctx: click.Context, param: click.Parameter, value: t.Any) -> t.Any:
+ """The ``--key`` option must be specified when ``--cert`` is a file.
+ Modifies the ``cert`` param to be a ``(cert, key)`` pair if needed.
+ """
+ cert = ctx.params.get("cert")
+ is_adhoc = cert == "adhoc"
+
+ try:
+ import ssl
+ except ImportError:
+ is_context = False
+ else:
+ is_context = isinstance(cert, ssl.SSLContext)
+
+ if value is not None:
+ if is_adhoc:
+ raise click.BadParameter(
+ 'When "--cert" is "adhoc", "--key" is not used.', ctx, param
+ )
+
+ if is_context:
+ raise click.BadParameter(
+ 'When "--cert" is an SSLContext object, "--key" is not used.',
+ ctx,
+ param,
+ )
+
+ if not cert:
+ raise click.BadParameter('"--cert" must also be specified.', ctx, param)
+
+ ctx.params["cert"] = cert, value
+
+ else:
+ if cert and not (is_adhoc or is_context):
+ raise click.BadParameter('Required when using "--cert".', ctx, param)
+
+ return value
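+
+# Illustrative "flask run" TLS invocations (file and module names hypothetical):
+#
+#     flask run --cert cert.pem --key key.pem   # certificate/key file pair
+#     flask run --cert adhoc                    # ad-hoc cert, needs cryptography
+#     flask run --cert mymodule:ssl_ctx         # an imported ssl.SSLContext object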
+
+
+class SeparatedPathType(click.Path):
+    """Click option type that accepts a list of values separated by the
+    OS's path separator (``:`` on POSIX, ``;`` on Windows). Each value is
+    validated as a :class:`click.Path` type.
+ """
+
+ def convert(
+ self, value: t.Any, param: click.Parameter | None, ctx: click.Context | None
+ ) -> t.Any:
+ items = self.split_envvar_value(value)
+ # can't call no-arg super() inside list comprehension until Python 3.12
+ super_convert = super().convert
+ return [super_convert(item, param, ctx) for item in items]
+
+
+@click.command("run", short_help="Run a development server.")
+@click.option("--host", "-h", default="127.0.0.1", help="The interface to bind to.")
+@click.option("--port", "-p", default=5000, help="The port to bind to.")
+@click.option(
+ "--cert",
+ type=CertParamType(),
+ help="Specify a certificate file to use HTTPS.",
+ is_eager=True,
+)
+@click.option(
+ "--key",
+ type=click.Path(exists=True, dir_okay=False, resolve_path=True),
+ callback=_validate_key,
+ expose_value=False,
+ help="The key file to use when specifying a certificate.",
+)
+@click.option(
+ "--reload/--no-reload",
+ default=None,
+ help="Enable or disable the reloader. By default the reloader "
+ "is active if debug is enabled.",
+)
+@click.option(
+ "--debugger/--no-debugger",
+ default=None,
+ help="Enable or disable the debugger. By default the debugger "
+ "is active if debug is enabled.",
+)
+@click.option(
+ "--with-threads/--without-threads",
+ default=True,
+ help="Enable or disable multithreading.",
+)
+@click.option(
+ "--extra-files",
+ default=None,
+ type=SeparatedPathType(),
+ help=(
+ "Extra files that trigger a reload on change. Multiple paths"
+ f" are separated by {os.path.pathsep!r}."
+ ),
+)
+@click.option(
+ "--exclude-patterns",
+ default=None,
+ type=SeparatedPathType(),
+ help=(
+ "Files matching these fnmatch patterns will not trigger a reload"
+ " on change. Multiple patterns are separated by"
+ f" {os.path.pathsep!r}."
+ ),
+)
+@pass_script_info
+def run_command(
+ info: ScriptInfo,
+ host: str,
+ port: int,
+ reload: bool,
+ debugger: bool,
+ with_threads: bool,
+ cert: ssl.SSLContext | tuple[str, str | None] | t.Literal["adhoc"] | None,
+ extra_files: list[str] | None,
+ exclude_patterns: list[str] | None,
+) -> None:
+ """Run a local development server.
+
+ This server is for development purposes only. It does not provide
+ the stability, security, or performance of production WSGI servers.
+
+ The reloader and debugger are enabled by default with the '--debug'
+ option.
+ """
+ try:
+ app: WSGIApplication = info.load_app()
+ except Exception as e:
+ if is_running_from_reloader():
+ # When reloading, print out the error immediately, but raise
+ # it later so the debugger or server can handle it.
+ traceback.print_exc()
+ err = e
+
+ def app(
+ environ: WSGIEnvironment, start_response: StartResponse
+ ) -> cabc.Iterable[bytes]:
+ raise err from None
+
+ else:
+ # When not reloading, raise the error immediately so the
+ # command fails.
+ raise e from None
+
+ debug = get_debug_flag()
+
+ if reload is None:
+ reload = debug
+
+ if debugger is None:
+ debugger = debug
+
+ show_server_banner(debug, info.app_import_path)
+
+ run_simple(
+ host,
+ port,
+ app,
+ use_reloader=reload,
+ use_debugger=debugger,
+ threaded=with_threads,
+ ssl_context=cert,
+ extra_files=extra_files,
+ exclude_patterns=exclude_patterns,
+ )
+
+
+run_command.params.insert(0, _debug_option)
+
+
+@click.command("shell", short_help="Run a shell in the app context.")
+@with_appcontext
+def shell_command() -> None:
+ """Run an interactive Python shell in the context of a given
+ Flask application. The application will populate the default
+ namespace of this shell according to its configuration.
+
+ This is useful for executing small snippets of management code
+ without having to manually configure the application.
+ """
+ import code
+
+ banner = (
+ f"Python {sys.version} on {sys.platform}\n"
+ f"App: {current_app.import_name}\n"
+ f"Instance: {current_app.instance_path}"
+ )
+ ctx: dict[str, t.Any] = {}
+
+ # Support the regular Python interpreter startup script if someone
+ # is using it.
+ startup = os.environ.get("PYTHONSTARTUP")
+ if startup and os.path.isfile(startup):
+ with open(startup) as f:
+ eval(compile(f.read(), startup, "exec"), ctx)
+
+ ctx.update(current_app.make_shell_context())
+
+ # Site, customize, or startup script can set a hook to call when
+ # entering interactive mode. The default one sets up readline with
+ # tab and history completion.
+ interactive_hook = getattr(sys, "__interactivehook__", None)
+
+ if interactive_hook is not None:
+ try:
+ import readline
+ from rlcompleter import Completer
+ except ImportError:
+ pass
+ else:
+ # rlcompleter uses __main__.__dict__ by default, which is
+ # flask.__main__. Use the shell context instead.
+ readline.set_completer(Completer(ctx).complete)
+
+ interactive_hook()
+
+ code.interact(banner=banner, local=ctx)
+
+
+@click.command("routes", short_help="Show the routes for the app.")
+@click.option(
+ "--sort",
+ "-s",
+ type=click.Choice(("endpoint", "methods", "domain", "rule", "match")),
+ default="endpoint",
+ help=(
+ "Method to sort routes by. 'match' is the order that Flask will match routes"
+ " when dispatching a request."
+ ),
+)
+@click.option("--all-methods", is_flag=True, help="Show HEAD and OPTIONS methods.")
+@with_appcontext
+def routes_command(sort: str, all_methods: bool) -> None:
+ """Show all registered routes with endpoints and methods."""
+ rules = list(current_app.url_map.iter_rules())
+
+ if not rules:
+ click.echo("No routes were registered.")
+ return
+
+ ignored_methods = set() if all_methods else {"HEAD", "OPTIONS"}
+ host_matching = current_app.url_map.host_matching
+ has_domain = any(rule.host if host_matching else rule.subdomain for rule in rules)
+ rows = []
+
+ for rule in rules:
+ row = [
+ rule.endpoint,
+ ", ".join(sorted((rule.methods or set()) - ignored_methods)),
+ ]
+
+ if has_domain:
+ row.append((rule.host if host_matching else rule.subdomain) or "")
+
+ row.append(rule.rule)
+ rows.append(row)
+
+ headers = ["Endpoint", "Methods"]
+ sorts = ["endpoint", "methods"]
+
+ if has_domain:
+ headers.append("Host" if host_matching else "Subdomain")
+ sorts.append("domain")
+
+ headers.append("Rule")
+ sorts.append("rule")
+
+ try:
+ rows.sort(key=itemgetter(sorts.index(sort)))
+ except ValueError:
+ pass
+
+ rows.insert(0, headers)
+ widths = [max(len(row[i]) for row in rows) for i in range(len(headers))]
+ rows.insert(1, ["-" * w for w in widths])
+ template = " ".join(f"{{{i}:<{w}}}" for i, w in enumerate(widths))
+
+ for row in rows:
+ click.echo(template.format(*row))
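+
+# Illustrative invocation and output shape (endpoints are hypothetical; column
+# widths are computed from the actual rows):
+#
+#     $ flask routes --sort rule
+#     Endpoint  Methods  Rule
+#     --------  -------  -----------------------
+#     index     GET      /
+#     static    GET      /static/<path:filename>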
+
+
+cli = FlaskGroup(
+ name="flask",
+ help="""\
+A general utility script for Flask applications.
+
+An application to load must be given with the '--app' option,
+'FLASK_APP' environment variable, or with a 'wsgi.py' or 'app.py' file
+in the current directory.
+""",
+)
+
+
+def main() -> None:
+ cli.main()
+
+
+if __name__ == "__main__":
+ main()
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/flask/config.py b/stripe_config/.venv/lib/python3.12/site-packages/flask/config.py
new file mode 100644
index 0000000..7e3ba17
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/flask/config.py
@@ -0,0 +1,370 @@
+from __future__ import annotations
+
+import errno
+import json
+import os
+import types
+import typing as t
+
+from werkzeug.utils import import_string
+
+if t.TYPE_CHECKING:
+ import typing_extensions as te
+
+ from .sansio.app import App
+
+
+T = t.TypeVar("T")
+
+
+class ConfigAttribute(t.Generic[T]):
+ """Makes an attribute forward to the config"""
+
+ def __init__(
+ self, name: str, get_converter: t.Callable[[t.Any], T] | None = None
+ ) -> None:
+ self.__name__ = name
+ self.get_converter = get_converter
+
+ @t.overload
+ def __get__(self, obj: None, owner: None) -> te.Self: ...
+
+ @t.overload
+ def __get__(self, obj: App, owner: type[App]) -> T: ...
+
+ def __get__(self, obj: App | None, owner: type[App] | None = None) -> T | te.Self:
+ if obj is None:
+ return self
+
+ rv = obj.config[self.__name__]
+
+ if self.get_converter is not None:
+ rv = self.get_converter(rv)
+
+ return rv # type: ignore[no-any-return]
+
+ def __set__(self, obj: App, value: t.Any) -> None:
+ obj.config[self.__name__] = value
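+
+# Example sketch: Flask uses this descriptor to expose config keys as app
+# attributes, roughly like the following (simplified illustration):
+#
+#     class App:
+#         testing = ConfigAttribute[bool]("TESTING")
+#
+# Reading app.testing returns app.config["TESTING"], and assigning to it
+# writes the value back into the config.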
+
+
+class Config(dict): # type: ignore[type-arg]
+ """Works exactly like a dict but provides ways to fill it from files
+ or special dictionaries. There are two common patterns to populate the
+ config.
+
+ Either you can fill the config from a config file::
+
+ app.config.from_pyfile('yourconfig.cfg')
+
+ Or alternatively you can define the configuration options in the
+ module that calls :meth:`from_object` or provide an import path to
+ a module that should be loaded. It is also possible to tell it to
+ use the same module and with that provide the configuration values
+ just before the call::
+
+ DEBUG = True
+ SECRET_KEY = 'development key'
+ app.config.from_object(__name__)
+
+ In both cases (loading from any Python file or loading from modules),
+ only uppercase keys are added to the config. This makes it possible to use
+ lowercase values in the config file for temporary values that are not added
+ to the config or to define the config keys in the same file that implements
+ the application.
+
+ Probably the most interesting way to load configurations is from an
+ environment variable pointing to a file::
+
+ app.config.from_envvar('YOURAPPLICATION_SETTINGS')
+
+ In this case before launching the application you have to set this
+ environment variable to the file you want to use. On Linux and OS X
+ use the export statement::
+
+ export YOURAPPLICATION_SETTINGS='/path/to/config/file'
+
+    On Windows use `set` instead.
+
+ :param root_path: path to which files are read relative from. When the
+ config object is created by the application, this is
+ the application's :attr:`~flask.Flask.root_path`.
+ :param defaults: an optional dictionary of default values
+ """
+
+ def __init__(
+ self,
+ root_path: str | os.PathLike[str],
+ defaults: dict[str, t.Any] | None = None,
+ ) -> None:
+ super().__init__(defaults or {})
+ self.root_path = root_path
+
+ def from_envvar(self, variable_name: str, silent: bool = False) -> bool:
+ """Loads a configuration from an environment variable pointing to
+ a configuration file. This is basically just a shortcut with nicer
+ error messages for this line of code::
+
+ app.config.from_pyfile(os.environ['YOURAPPLICATION_SETTINGS'])
+
+ :param variable_name: name of the environment variable
+ :param silent: set to ``True`` if you want silent failure for missing
+ files.
+ :return: ``True`` if the file was loaded successfully.
+ """
+ rv = os.environ.get(variable_name)
+ if not rv:
+ if silent:
+ return False
+ raise RuntimeError(
+ f"The environment variable {variable_name!r} is not set"
+ " and as such configuration could not be loaded. Set"
+ " this variable and make it point to a configuration"
+ " file"
+ )
+ return self.from_pyfile(rv, silent=silent)
+
+ def from_prefixed_env(
+ self, prefix: str = "FLASK", *, loads: t.Callable[[str], t.Any] = json.loads
+ ) -> bool:
+ """Load any environment variables that start with ``FLASK_``,
+ dropping the prefix from the env key for the config key. Values
+ are passed through a loading function to attempt to convert them
+ to more specific types than strings.
+
+ Keys are loaded in :func:`sorted` order.
+
+ The default loading function attempts to parse values as any
+ valid JSON type, including dicts and lists.
+
+ Specific items in nested dicts can be set by separating the
+ keys with double underscores (``__``). If an intermediate key
+ doesn't exist, it will be initialized to an empty dict.
+
+ :param prefix: Load env vars that start with this prefix,
+ separated with an underscore (``_``).
+ :param loads: Pass each string value to this function and use
+ the returned value as the config value. If any error is
+ raised it is ignored and the value remains a string. The
+ default is :func:`json.loads`.
+
+ .. versionadded:: 2.1
+ """
+ prefix = f"{prefix}_"
+ len_prefix = len(prefix)
+
+ for key in sorted(os.environ):
+ if not key.startswith(prefix):
+ continue
+
+ value = os.environ[key]
+
+ try:
+ value = loads(value)
+ except Exception:
+ # Keep the value as a string if loading failed.
+ pass
+
+ # Change to key.removeprefix(prefix) on Python >= 3.9.
+ key = key[len_prefix:]
+
+ if "__" not in key:
+ # A non-nested key, set directly.
+ self[key] = value
+ continue
+
+ # Traverse nested dictionaries with keys separated by "__".
+ current = self
+ *parts, tail = key.split("__")
+
+ for part in parts:
+ # If an intermediate dict does not exist, create it.
+ if part not in current:
+ current[part] = {}
+
+ current = current[part]
+
+ current[tail] = value
+
+ return True
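+
+# Example sketch (hypothetical env vars): with the default "FLASK" prefix,
+# calling app.config.from_prefixed_env() maps
+#
+#     FLASK_SECRET_KEY=dev    -> app.config["SECRET_KEY"] == "dev"   (plain string)
+#     FLASK_TESTING=true      -> app.config["TESTING"] is True       (parsed as JSON)
+#     FLASK_MAIL__PORT=25     -> app.config["MAIL"]["PORT"] == 25    (nested via "__")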
+
+ def from_pyfile(
+ self, filename: str | os.PathLike[str], silent: bool = False
+ ) -> bool:
+ """Updates the values in the config from a Python file. This function
+        behaves as if the file was imported as a module with the
+ :meth:`from_object` function.
+
+ :param filename: the filename of the config. This can either be an
+ absolute filename or a filename relative to the
+ root path.
+ :param silent: set to ``True`` if you want silent failure for missing
+ files.
+ :return: ``True`` if the file was loaded successfully.
+
+ .. versionadded:: 0.7
+ `silent` parameter.
+ """
+ filename = os.path.join(self.root_path, filename)
+ d = types.ModuleType("config")
+ d.__file__ = filename
+ try:
+ with open(filename, mode="rb") as config_file:
+ exec(compile(config_file.read(), filename, "exec"), d.__dict__)
+ except OSError as e:
+ if silent and e.errno in (errno.ENOENT, errno.EISDIR, errno.ENOTDIR):
+ return False
+ e.strerror = f"Unable to load configuration file ({e.strerror})"
+ raise
+ self.from_object(d)
+ return True
+
+ def from_object(self, obj: object | str) -> None:
+ """Updates the values from the given object. An object can be of one
+ of the following two types:
+
+ - a string: in this case the object with that name will be imported
+ - an actual object reference: that object is used directly
+
+ Objects are usually either modules or classes. :meth:`from_object`
+ loads only the uppercase attributes of the module/class. A ``dict``
+ object will not work with :meth:`from_object` because the keys of a
+ ``dict`` are not attributes of the ``dict`` class.
+
+ Example of module-based configuration::
+
+ app.config.from_object('yourapplication.default_config')
+ from yourapplication import default_config
+ app.config.from_object(default_config)
+
+ Nothing is done to the object before loading. If the object is a
+ class and has ``@property`` attributes, it needs to be
+ instantiated before being passed to this method.
+
+ You should not use this function to load the actual configuration but
+ rather configuration defaults. The actual config should be loaded
+ with :meth:`from_pyfile` and ideally from a location not within the
+ package because the package might be installed system wide.
+
+ See :ref:`config-dev-prod` for an example of class-based configuration
+ using :meth:`from_object`.
+
+ :param obj: an import name or object
+ """
+ if isinstance(obj, str):
+ obj = import_string(obj)
+ for key in dir(obj):
+ if key.isupper():
+ self[key] = getattr(obj, key)
+
+ def from_file(
+ self,
+ filename: str | os.PathLike[str],
+ load: t.Callable[[t.IO[t.Any]], t.Mapping[str, t.Any]],
+ silent: bool = False,
+ text: bool = True,
+ ) -> bool:
+ """Update the values in the config from a file that is loaded
+ using the ``load`` parameter. The loaded data is passed to the
+ :meth:`from_mapping` method.
+
+ .. code-block:: python
+
+ import json
+ app.config.from_file("config.json", load=json.load)
+
+ import tomllib
+ app.config.from_file("config.toml", load=tomllib.load, text=False)
+
+ :param filename: The path to the data file. This can be an
+ absolute path or relative to the config root path.
+ :param load: A callable that takes a file handle and returns a
+ mapping of loaded data from the file.
+ :type load: ``Callable[[Reader], Mapping]`` where ``Reader``
+ implements a ``read`` method.
+ :param silent: Ignore the file if it doesn't exist.
+ :param text: Open the file in text or binary mode.
+ :return: ``True`` if the file was loaded successfully.
+
+ .. versionchanged:: 2.3
+ The ``text`` parameter was added.
+
+ .. versionadded:: 2.0
+ """
+ filename = os.path.join(self.root_path, filename)
+
+ try:
+ with open(filename, "r" if text else "rb") as f:
+ obj = load(f)
+ except OSError as e:
+ if silent and e.errno in (errno.ENOENT, errno.EISDIR):
+ return False
+
+ e.strerror = f"Unable to load configuration file ({e.strerror})"
+ raise
+
+ return self.from_mapping(obj)
+
+ def from_mapping(
+ self, mapping: t.Mapping[str, t.Any] | None = None, **kwargs: t.Any
+ ) -> bool:
+ """Updates the config like :meth:`update` ignoring items with
+ non-upper keys.
+
+ :return: Always returns ``True``.
+
+ .. versionadded:: 0.11
+ """
+ mappings: dict[str, t.Any] = {}
+ if mapping is not None:
+ mappings.update(mapping)
+ mappings.update(kwargs)
+ for key, value in mappings.items():
+ if key.isupper():
+ self[key] = value
+ return True
+
+ def get_namespace(
+ self, namespace: str, lowercase: bool = True, trim_namespace: bool = True
+ ) -> dict[str, t.Any]:
+ """Returns a dictionary containing a subset of configuration options
+ that match the specified namespace/prefix. Example usage::
+
+ app.config['IMAGE_STORE_TYPE'] = 'fs'
+ app.config['IMAGE_STORE_PATH'] = '/var/app/images'
+ app.config['IMAGE_STORE_BASE_URL'] = 'http://img.website.com'
+ image_store_config = app.config.get_namespace('IMAGE_STORE_')
+
+ The resulting dictionary `image_store_config` would look like::
+
+ {
+ 'type': 'fs',
+ 'path': '/var/app/images',
+ 'base_url': 'http://img.website.com'
+ }
+
+ This is often useful when configuration options map directly to
+ keyword arguments in functions or class constructors.
+
+ :param namespace: a configuration namespace
+ :param lowercase: a flag indicating if the keys of the resulting
+ dictionary should be lowercase
+ :param trim_namespace: a flag indicating if the keys of the resulting
+ dictionary should not include the namespace
+
+ .. versionadded:: 0.11
+ """
+ rv = {}
+ for k, v in self.items():
+ if not k.startswith(namespace):
+ continue
+ if trim_namespace:
+ key = k[len(namespace) :]
+ else:
+ key = k
+ if lowercase:
+ key = key.lower()
+ rv[key] = v
+ return rv
+
+ def __repr__(self) -> str:
+ return f"<{type(self).__name__} {dict.__repr__(self)}>"
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/flask/ctx.py b/stripe_config/.venv/lib/python3.12/site-packages/flask/ctx.py
new file mode 100644
index 0000000..9b164d3
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/flask/ctx.py
@@ -0,0 +1,449 @@
+from __future__ import annotations
+
+import contextvars
+import sys
+import typing as t
+from functools import update_wrapper
+from types import TracebackType
+
+from werkzeug.exceptions import HTTPException
+
+from . import typing as ft
+from .globals import _cv_app
+from .globals import _cv_request
+from .signals import appcontext_popped
+from .signals import appcontext_pushed
+
+if t.TYPE_CHECKING: # pragma: no cover
+ from _typeshed.wsgi import WSGIEnvironment
+
+ from .app import Flask
+ from .sessions import SessionMixin
+ from .wrappers import Request
+
+
+# a singleton sentinel value for parameter defaults
+_sentinel = object()
+
+
+class _AppCtxGlobals:
+ """A plain object. Used as a namespace for storing data during an
+ application context.
+
+ Creating an app context automatically creates this object, which is
+ made available as the :data:`g` proxy.
+
+ .. describe:: 'key' in g
+
+ Check whether an attribute is present.
+
+ .. versionadded:: 0.10
+
+ .. describe:: iter(g)
+
+ Return an iterator over the attribute names.
+
+ .. versionadded:: 0.10
+ """
+
+ # Define attr methods to let mypy know this is a namespace object
+ # that has arbitrary attributes.
+
+ def __getattr__(self, name: str) -> t.Any:
+ try:
+ return self.__dict__[name]
+ except KeyError:
+ raise AttributeError(name) from None
+
+ def __setattr__(self, name: str, value: t.Any) -> None:
+ self.__dict__[name] = value
+
+ def __delattr__(self, name: str) -> None:
+ try:
+ del self.__dict__[name]
+ except KeyError:
+ raise AttributeError(name) from None
+
+ def get(self, name: str, default: t.Any | None = None) -> t.Any:
+ """Get an attribute by name, or a default value. Like
+ :meth:`dict.get`.
+
+ :param name: Name of attribute to get.
+ :param default: Value to return if the attribute is not present.
+
+ .. versionadded:: 0.10
+ """
+ return self.__dict__.get(name, default)
+
+ def pop(self, name: str, default: t.Any = _sentinel) -> t.Any:
+ """Get and remove an attribute by name. Like :meth:`dict.pop`.
+
+ :param name: Name of attribute to pop.
+ :param default: Value to return if the attribute is not present,
+ instead of raising a ``KeyError``.
+
+ .. versionadded:: 0.11
+ """
+ if default is _sentinel:
+ return self.__dict__.pop(name)
+ else:
+ return self.__dict__.pop(name, default)
+
+ def setdefault(self, name: str, default: t.Any = None) -> t.Any:
+ """Get the value of an attribute if it is present, otherwise
+ set and return a default value. Like :meth:`dict.setdefault`.
+
+ :param name: Name of attribute to get.
+ :param default: Value to set and return if the attribute is not
+ present.
+
+ .. versionadded:: 0.11
+ """
+ return self.__dict__.setdefault(name, default)
+
+ def __contains__(self, item: str) -> bool:
+ return item in self.__dict__
+
+ def __iter__(self) -> t.Iterator[str]:
+ return iter(self.__dict__)
+
+ def __repr__(self) -> str:
+ ctx = _cv_app.get(None)
+ if ctx is not None:
+            return f"<flask.g of '{ctx.app.name}'>"
+ return object.__repr__(self)
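+
+# Usage sketch (hypothetical attribute and connect() helper): this object backs
+# the ``g`` proxy while an app context is active.
+#
+#     from flask import g
+#
+#     g.db_conn = connect()        # set any attribute
+#     conn = g.get("db_conn")      # dict-like helpers are available
+#     g.pop("db_conn", None)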
+
+
+def after_this_request(
+ f: ft.AfterRequestCallable[t.Any],
+) -> ft.AfterRequestCallable[t.Any]:
+ """Executes a function after this request. This is useful to modify
+ response objects. The function is passed the response object and has
+ to return the same or a new one.
+
+ Example::
+
+ @app.route('/')
+ def index():
+ @after_this_request
+ def add_header(response):
+ response.headers['X-Foo'] = 'Parachute'
+ return response
+ return 'Hello World!'
+
+ This is more useful if a function other than the view function wants to
+ modify a response. For instance think of a decorator that wants to add
+ some headers without converting the return value into a response object.
+
+ .. versionadded:: 0.9
+ """
+ ctx = _cv_request.get(None)
+
+ if ctx is None:
+ raise RuntimeError(
+ "'after_this_request' can only be used when a request"
+ " context is active, such as in a view function."
+ )
+
+ ctx._after_request_functions.append(f)
+ return f
+
+
+F = t.TypeVar("F", bound=t.Callable[..., t.Any])
+
+
+def copy_current_request_context(f: F) -> F:
+ """A helper function that decorates a function to retain the current
+ request context. This is useful when working with greenlets. The moment
+ the function is decorated a copy of the request context is created and
+ then pushed when the function is called. The current session is also
+ included in the copied request context.
+
+ Example::
+
+ import gevent
+ from flask import copy_current_request_context
+
+ @app.route('/')
+ def index():
+ @copy_current_request_context
+ def do_some_work():
+ # do some work here, it can access flask.request or
+ # flask.session like you would otherwise in the view function.
+ ...
+ gevent.spawn(do_some_work)
+ return 'Regular response'
+
+ .. versionadded:: 0.10
+ """
+ ctx = _cv_request.get(None)
+
+ if ctx is None:
+ raise RuntimeError(
+ "'copy_current_request_context' can only be used when a"
+ " request context is active, such as in a view function."
+ )
+
+ ctx = ctx.copy()
+
+ def wrapper(*args: t.Any, **kwargs: t.Any) -> t.Any:
+ with ctx: # type: ignore[union-attr]
+ return ctx.app.ensure_sync(f)(*args, **kwargs) # type: ignore[union-attr]
+
+ return update_wrapper(wrapper, f) # type: ignore[return-value]
+
+
+def has_request_context() -> bool:
+ """If you have code that wants to test if a request context is there or
+    not, this function can be used. For instance, you may want to take advantage
+ of request information if the request object is available, but fail
+ silently if it is unavailable.
+
+ ::
+
+ class User(db.Model):
+
+ def __init__(self, username, remote_addr=None):
+ self.username = username
+ if remote_addr is None and has_request_context():
+ remote_addr = request.remote_addr
+ self.remote_addr = remote_addr
+
+ Alternatively you can also just test any of the context bound objects
+    (such as :class:`request` or :class:`g`) for truthiness::
+
+ class User(db.Model):
+
+ def __init__(self, username, remote_addr=None):
+ self.username = username
+ if remote_addr is None and request:
+ remote_addr = request.remote_addr
+ self.remote_addr = remote_addr
+
+ .. versionadded:: 0.7
+ """
+ return _cv_request.get(None) is not None
+
+
+def has_app_context() -> bool:
+ """Works like :func:`has_request_context` but for the application
+ context. You can also just do a boolean check on the
+ :data:`current_app` object instead.
+
+ .. versionadded:: 0.9
+ """
+ return _cv_app.get(None) is not None
+
+
+class AppContext:
+ """The app context contains application-specific information. An app
+ context is created and pushed at the beginning of each request if
+ one is not already active. An app context is also pushed when
+ running CLI commands.
+ """
+
+ def __init__(self, app: Flask) -> None:
+ self.app = app
+ self.url_adapter = app.create_url_adapter(None)
+ self.g: _AppCtxGlobals = app.app_ctx_globals_class()
+ self._cv_tokens: list[contextvars.Token[AppContext]] = []
+
+ def push(self) -> None:
+ """Binds the app context to the current context."""
+ self._cv_tokens.append(_cv_app.set(self))
+ appcontext_pushed.send(self.app, _async_wrapper=self.app.ensure_sync)
+
+ def pop(self, exc: BaseException | None = _sentinel) -> None: # type: ignore
+ """Pops the app context."""
+ try:
+ if len(self._cv_tokens) == 1:
+ if exc is _sentinel:
+ exc = sys.exc_info()[1]
+ self.app.do_teardown_appcontext(exc)
+ finally:
+ ctx = _cv_app.get()
+ _cv_app.reset(self._cv_tokens.pop())
+
+ if ctx is not self:
+ raise AssertionError(
+ f"Popped wrong app context. ({ctx!r} instead of {self!r})"
+ )
+
+ appcontext_popped.send(self.app, _async_wrapper=self.app.ensure_sync)
+
+ def __enter__(self) -> AppContext:
+ self.push()
+ return self
+
+ def __exit__(
+ self,
+ exc_type: type | None,
+ exc_value: BaseException | None,
+ tb: TracebackType | None,
+ ) -> None:
+ self.pop(exc_value)
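+
+# Usage sketch (``app`` is a hypothetical Flask instance): push an app context
+# manually, e.g. in a script or a test, so current_app and g are available.
+#
+#     with app.app_context():
+#         from flask import current_app
+#         current_app.logger.info("inside the app context")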
+
+
+class RequestContext:
+ """The request context contains per-request information. The Flask
+ app creates and pushes it at the beginning of the request, then pops
+ it at the end of the request. It will create the URL adapter and
+ request object for the WSGI environment provided.
+
+ Do not attempt to use this class directly, instead use
+ :meth:`~flask.Flask.test_request_context` and
+ :meth:`~flask.Flask.request_context` to create this object.
+
+ When the request context is popped, it will evaluate all the
+ functions registered on the application for teardown execution
+ (:meth:`~flask.Flask.teardown_request`).
+
+ The request context is automatically popped at the end of the
+ request. When using the interactive debugger, the context will be
+ restored so ``request`` is still accessible. Similarly, the test
+ client can preserve the context after the request ends. However,
+ teardown functions may already have closed some resources such as
+ database connections.
+ """
+
+ def __init__(
+ self,
+ app: Flask,
+ environ: WSGIEnvironment,
+ request: Request | None = None,
+ session: SessionMixin | None = None,
+ ) -> None:
+ self.app = app
+ if request is None:
+ request = app.request_class(environ)
+ request.json_module = app.json
+ self.request: Request = request
+ self.url_adapter = None
+ try:
+ self.url_adapter = app.create_url_adapter(self.request)
+ except HTTPException as e:
+ self.request.routing_exception = e
+ self.flashes: list[tuple[str, str]] | None = None
+ self.session: SessionMixin | None = session
+ # Functions that should be executed after the request on the response
+ # object. These will be called before the regular "after_request"
+ # functions.
+ self._after_request_functions: list[ft.AfterRequestCallable[t.Any]] = []
+
+ self._cv_tokens: list[
+ tuple[contextvars.Token[RequestContext], AppContext | None]
+ ] = []
+
+ def copy(self) -> RequestContext:
+ """Creates a copy of this request context with the same request object.
+ This can be used to move a request context to a different greenlet.
+ Because the actual request object is the same this cannot be used to
+ move a request context to a different thread unless access to the
+ request object is locked.
+
+ .. versionadded:: 0.10
+
+ .. versionchanged:: 1.1
+ The current session object is used instead of reloading the original
+ data. This prevents `flask.session` pointing to an out-of-date object.
+ """
+ return self.__class__(
+ self.app,
+ environ=self.request.environ,
+ request=self.request,
+ session=self.session,
+ )
+
+ def match_request(self) -> None:
+ """Can be overridden by a subclass to hook into the matching
+ of the request.
+ """
+ try:
+ result = self.url_adapter.match(return_rule=True) # type: ignore
+ self.request.url_rule, self.request.view_args = result # type: ignore
+ except HTTPException as e:
+ self.request.routing_exception = e
+
+ def push(self) -> None:
+ # Before we push the request context we have to ensure that there
+ # is an application context.
+ app_ctx = _cv_app.get(None)
+
+ if app_ctx is None or app_ctx.app is not self.app:
+ app_ctx = self.app.app_context()
+ app_ctx.push()
+ else:
+ app_ctx = None
+
+ self._cv_tokens.append((_cv_request.set(self), app_ctx))
+
+ # Open the session at the moment that the request context is available.
+ # This allows a custom open_session method to use the request context.
+ # Only open a new session if this is the first time the request was
+ # pushed, otherwise stream_with_context loses the session.
+ if self.session is None:
+ session_interface = self.app.session_interface
+ self.session = session_interface.open_session(self.app, self.request)
+
+ if self.session is None:
+ self.session = session_interface.make_null_session(self.app)
+
+ # Match the request URL after loading the session, so that the
+ # session is available in custom URL converters.
+ if self.url_adapter is not None:
+ self.match_request()
+
+ def pop(self, exc: BaseException | None = _sentinel) -> None: # type: ignore
+ """Pops the request context and unbinds it by doing that. This will
+ also trigger the execution of functions registered by the
+ :meth:`~flask.Flask.teardown_request` decorator.
+
+ .. versionchanged:: 0.9
+ Added the `exc` argument.
+ """
+ clear_request = len(self._cv_tokens) == 1
+
+ try:
+ if clear_request:
+ if exc is _sentinel:
+ exc = sys.exc_info()[1]
+ self.app.do_teardown_request(exc)
+
+ request_close = getattr(self.request, "close", None)
+ if request_close is not None:
+ request_close()
+ finally:
+ ctx = _cv_request.get()
+ token, app_ctx = self._cv_tokens.pop()
+ _cv_request.reset(token)
+
+ # get rid of circular dependencies at the end of the request
+ # so that we don't require the GC to be active.
+ if clear_request:
+ ctx.request.environ["werkzeug.request"] = None
+
+ if app_ctx is not None:
+ app_ctx.pop(exc)
+
+ if ctx is not self:
+ raise AssertionError(
+ f"Popped wrong request context. ({ctx!r} instead of {self!r})"
+ )
+
+ def __enter__(self) -> RequestContext:
+ self.push()
+ return self
+
+ def __exit__(
+ self,
+ exc_type: type | None,
+ exc_value: BaseException | None,
+ tb: TracebackType | None,
+ ) -> None:
+ self.pop(exc_value)
+
+ def __repr__(self) -> str:
+ return (
+ f"<{type(self).__name__} {self.request.url!r}"
+ f" [{self.request.method}] of {self.app.name}>"
+ )
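+
+# Usage sketch (``app`` is a hypothetical Flask instance): request contexts are
+# normally managed by Flask itself; in tests they are typically created through
+# the documented helper rather than this class directly.
+#
+#     with app.test_request_context("/hello", method="POST"):
+#         from flask import request
+#         assert request.path == "/hello"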
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/flask/debughelpers.py b/stripe_config/.venv/lib/python3.12/site-packages/flask/debughelpers.py
new file mode 100644
index 0000000..2c8c4c4
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/flask/debughelpers.py
@@ -0,0 +1,178 @@
+from __future__ import annotations
+
+import typing as t
+
+from jinja2.loaders import BaseLoader
+from werkzeug.routing import RequestRedirect
+
+from .blueprints import Blueprint
+from .globals import request_ctx
+from .sansio.app import App
+
+if t.TYPE_CHECKING:
+ from .sansio.scaffold import Scaffold
+ from .wrappers import Request
+
+
+class UnexpectedUnicodeError(AssertionError, UnicodeError):
+ """Raised in places where we want some better error reporting for
+ unexpected unicode or binary data.
+ """
+
+
+class DebugFilesKeyError(KeyError, AssertionError):
+ """Raised from request.files during debugging. The idea is that it can
+ provide a better error message than just a generic KeyError/BadRequest.
+ """
+
+ def __init__(self, request: Request, key: str) -> None:
+ form_matches = request.form.getlist(key)
+ buf = [
+ f"You tried to access the file {key!r} in the request.files"
+ " dictionary but it does not exist. The mimetype for the"
+ f" request is {request.mimetype!r} instead of"
+ " 'multipart/form-data' which means that no file contents"
+ " were transmitted. To fix this error you should provide"
+ ' enctype="multipart/form-data" in your form.'
+ ]
+ if form_matches:
+ names = ", ".join(repr(x) for x in form_matches)
+ buf.append(
+ "\n\nThe browser instead transmitted some file names. "
+ f"This was submitted: {names}"
+ )
+ self.msg = "".join(buf)
+
+ def __str__(self) -> str:
+ return self.msg
+
+
+class FormDataRoutingRedirect(AssertionError):
+ """This exception is raised in debug mode if a routing redirect
+ would cause the browser to drop the method or body. This happens
+ when method is not GET, HEAD or OPTIONS and the status code is not
+ 307 or 308.
+ """
+
+ def __init__(self, request: Request) -> None:
+ exc = request.routing_exception
+ assert isinstance(exc, RequestRedirect)
+ buf = [
+ f"A request was sent to '{request.url}', but routing issued"
+ f" a redirect to the canonical URL '{exc.new_url}'."
+ ]
+
+ if f"{request.base_url}/" == exc.new_url.partition("?")[0]:
+ buf.append(
+ " The URL was defined with a trailing slash. Flask"
+ " will redirect to the URL with a trailing slash if it"
+ " was accessed without one."
+ )
+
+ buf.append(
+ " Send requests to the canonical URL, or use 307 or 308 for"
+ " routing redirects. Otherwise, browsers will drop form"
+ " data.\n\n"
+ "This exception is only raised in debug mode."
+ )
+ super().__init__("".join(buf))
+
+
+def attach_enctype_error_multidict(request: Request) -> None:
+ """Patch ``request.files.__getitem__`` to raise a descriptive error
+ about ``enctype=multipart/form-data``.
+
+ :param request: The request to patch.
+ :meta private:
+ """
+ oldcls = request.files.__class__
+
+ class newcls(oldcls): # type: ignore[valid-type, misc]
+ def __getitem__(self, key: str) -> t.Any:
+ try:
+ return super().__getitem__(key)
+ except KeyError as e:
+ if key not in request.form:
+ raise
+
+ raise DebugFilesKeyError(request, key).with_traceback(
+ e.__traceback__
+ ) from None
+
+ newcls.__name__ = oldcls.__name__
+ newcls.__module__ = oldcls.__module__
+ request.files.__class__ = newcls
+
+
+def _dump_loader_info(loader: BaseLoader) -> t.Iterator[str]:
+ yield f"class: {type(loader).__module__}.{type(loader).__name__}"
+ for key, value in sorted(loader.__dict__.items()):
+ if key.startswith("_"):
+ continue
+ if isinstance(value, (tuple, list)):
+ if not all(isinstance(x, str) for x in value):
+ continue
+ yield f"{key}:"
+ for item in value:
+ yield f" - {item}"
+ continue
+ elif not isinstance(value, (str, int, float, bool)):
+ continue
+ yield f"{key}: {value!r}"
+
+
+def explain_template_loading_attempts(
+ app: App,
+ template: str,
+ attempts: list[
+ tuple[
+ BaseLoader,
+ Scaffold,
+ tuple[str, str | None, t.Callable[[], bool] | None] | None,
+ ]
+ ],
+) -> None:
+ """This should help developers understand what failed"""
+ info = [f"Locating template {template!r}:"]
+ total_found = 0
+ blueprint = None
+ if request_ctx and request_ctx.request.blueprint is not None:
+ blueprint = request_ctx.request.blueprint
+
+ for idx, (loader, srcobj, triple) in enumerate(attempts):
+ if isinstance(srcobj, App):
+ src_info = f"application {srcobj.import_name!r}"
+ elif isinstance(srcobj, Blueprint):
+ src_info = f"blueprint {srcobj.name!r} ({srcobj.import_name})"
+ else:
+ src_info = repr(srcobj)
+
+ info.append(f"{idx + 1:5}: trying loader of {src_info}")
+
+ for line in _dump_loader_info(loader):
+ info.append(f" {line}")
+
+ if triple is None:
+ detail = "no match"
+ else:
+            detail = f"found ({triple[1] or '<string>'!r})"
+ total_found += 1
+ info.append(f" -> {detail}")
+
+ seems_fishy = False
+ if total_found == 0:
+ info.append("Error: the template could not be found.")
+ seems_fishy = True
+ elif total_found > 1:
+ info.append("Warning: multiple loaders returned a match for the template.")
+ seems_fishy = True
+
+ if blueprint is not None and seems_fishy:
+ info.append(
+ " The template was looked up from an endpoint that belongs"
+ f" to the blueprint {blueprint!r}."
+ )
+ info.append(" Maybe you did not place a template in the right folder?")
+ info.append(" See https://flask.palletsprojects.com/blueprints/#templates")
+
+ app.logger.info("\n".join(info))
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/flask/globals.py b/stripe_config/.venv/lib/python3.12/site-packages/flask/globals.py
new file mode 100644
index 0000000..e2c410c
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/flask/globals.py
@@ -0,0 +1,51 @@
+from __future__ import annotations
+
+import typing as t
+from contextvars import ContextVar
+
+from werkzeug.local import LocalProxy
+
+if t.TYPE_CHECKING: # pragma: no cover
+ from .app import Flask
+ from .ctx import _AppCtxGlobals
+ from .ctx import AppContext
+ from .ctx import RequestContext
+ from .sessions import SessionMixin
+ from .wrappers import Request
+
+
+_no_app_msg = """\
+Working outside of application context.
+
+This typically means that you attempted to use functionality that needed
+the current application. To solve this, set up an application context
+with app.app_context(). See the documentation for more information.\
+"""
+_cv_app: ContextVar[AppContext] = ContextVar("flask.app_ctx")
+app_ctx: AppContext = LocalProxy( # type: ignore[assignment]
+ _cv_app, unbound_message=_no_app_msg
+)
+current_app: Flask = LocalProxy( # type: ignore[assignment]
+ _cv_app, "app", unbound_message=_no_app_msg
+)
+g: _AppCtxGlobals = LocalProxy( # type: ignore[assignment]
+ _cv_app, "g", unbound_message=_no_app_msg
+)
+
+_no_req_msg = """\
+Working outside of request context.
+
+This typically means that you attempted to use functionality that needed
+an active HTTP request. Consult the documentation on testing for
+information about how to avoid this problem.\
+"""
+_cv_request: ContextVar[RequestContext] = ContextVar("flask.request_ctx")
+request_ctx: RequestContext = LocalProxy( # type: ignore[assignment]
+ _cv_request, unbound_message=_no_req_msg
+)
+request: Request = LocalProxy( # type: ignore[assignment]
+ _cv_request, "request", unbound_message=_no_req_msg
+)
+session: SessionMixin = LocalProxy( # type: ignore[assignment]
+ _cv_request, "session", unbound_message=_no_req_msg
+)
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/flask/helpers.py b/stripe_config/.venv/lib/python3.12/site-packages/flask/helpers.py
new file mode 100644
index 0000000..359a842
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/flask/helpers.py
@@ -0,0 +1,621 @@
+from __future__ import annotations
+
+import importlib.util
+import os
+import sys
+import typing as t
+from datetime import datetime
+from functools import lru_cache
+from functools import update_wrapper
+
+import werkzeug.utils
+from werkzeug.exceptions import abort as _wz_abort
+from werkzeug.utils import redirect as _wz_redirect
+from werkzeug.wrappers import Response as BaseResponse
+
+from .globals import _cv_request
+from .globals import current_app
+from .globals import request
+from .globals import request_ctx
+from .globals import session
+from .signals import message_flashed
+
+if t.TYPE_CHECKING: # pragma: no cover
+ from .wrappers import Response
+
+
+def get_debug_flag() -> bool:
+ """Get whether debug mode should be enabled for the app, indicated by the
+ :envvar:`FLASK_DEBUG` environment variable. The default is ``False``.
+ """
+ val = os.environ.get("FLASK_DEBUG")
+ return bool(val and val.lower() not in {"0", "false", "no"})
+
+
+def get_load_dotenv(default: bool = True) -> bool:
+ """Get whether the user has disabled loading default dotenv files by
+ setting :envvar:`FLASK_SKIP_DOTENV`. The default is ``True``, load
+ the files.
+
+ :param default: What to return if the env var isn't set.
+ """
+ val = os.environ.get("FLASK_SKIP_DOTENV")
+
+ if not val:
+ return default
+
+ return val.lower() in ("0", "false", "no")
+
+
+def stream_with_context(
+ generator_or_function: t.Iterator[t.AnyStr] | t.Callable[..., t.Iterator[t.AnyStr]],
+) -> t.Iterator[t.AnyStr]:
+ """Request contexts disappear when the response is started on the server.
+ This is done for efficiency reasons and to make it less likely to encounter
+ memory leaks with badly written WSGI middlewares. The downside is that if
+ you are using streamed responses, the generator cannot access request bound
+ information any more.
+
+ This function however can help you keep the context around for longer::
+
+ from flask import stream_with_context, request, Response
+
+ @app.route('/stream')
+ def streamed_response():
+ @stream_with_context
+ def generate():
+ yield 'Hello '
+ yield request.args['name']
+ yield '!'
+ return Response(generate())
+
+ Alternatively it can also be used around a specific generator::
+
+ from flask import stream_with_context, request, Response
+
+ @app.route('/stream')
+ def streamed_response():
+ def generate():
+ yield 'Hello '
+ yield request.args['name']
+ yield '!'
+ return Response(stream_with_context(generate()))
+
+ .. versionadded:: 0.9
+ """
+ try:
+ gen = iter(generator_or_function) # type: ignore[arg-type]
+ except TypeError:
+
+ def decorator(*args: t.Any, **kwargs: t.Any) -> t.Any:
+ gen = generator_or_function(*args, **kwargs) # type: ignore[operator]
+ return stream_with_context(gen)
+
+ return update_wrapper(decorator, generator_or_function) # type: ignore[arg-type]
+
+ def generator() -> t.Iterator[t.AnyStr | None]:
+ ctx = _cv_request.get(None)
+ if ctx is None:
+ raise RuntimeError(
+ "'stream_with_context' can only be used when a request"
+ " context is active, such as in a view function."
+ )
+ with ctx:
+ # Dummy sentinel. Has to be inside the context block or we're
+ # not actually keeping the context around.
+ yield None
+
+ # The try/finally is here so that if someone passes a WSGI level
+ # iterator in we're still running the cleanup logic. Generators
+ # don't need that because they are closed on their destruction
+ # automatically.
+ try:
+ yield from gen
+ finally:
+ if hasattr(gen, "close"):
+ gen.close()
+
+ # The trick is to start the generator. Then the code execution runs until
+ # the first dummy None is yielded at which point the context was already
+ # pushed. This item is discarded. Then when the iteration continues the
+ # real generator is executed.
+ wrapped_g = generator()
+ next(wrapped_g)
+ return wrapped_g # type: ignore[return-value]
+
+
+def make_response(*args: t.Any) -> Response:
+ """Sometimes it is necessary to set additional headers in a view. Because
+ views do not have to return response objects but can return a value that
+ is converted into a response object by Flask itself, it becomes tricky to
+ add headers to it. This function can be called instead of using a return
+ and you will get a response object which you can use to attach headers.
+
+    If your view looked like this and you want to add a new header::
+
+ def index():
+ return render_template('index.html', foo=42)
+
+ You can now do something like this::
+
+ def index():
+ response = make_response(render_template('index.html', foo=42))
+ response.headers['X-Parachutes'] = 'parachutes are cool'
+ return response
+
+ This function accepts the very same arguments you can return from a
+ view function. This for example creates a response with a 404 error
+ code::
+
+ response = make_response(render_template('not_found.html'), 404)
+
+ The other use case of this function is to force the return value of a
+ view function into a response which is helpful with view
+ decorators::
+
+ response = make_response(view_function())
+ response.headers['X-Parachutes'] = 'parachutes are cool'
+
+ Internally this function does the following things:
+
+ - if no arguments are passed, it creates a new, empty response object.
+ - if one argument is passed, :meth:`flask.Flask.make_response`
+ is invoked with it.
+ - if more than one argument is passed, the arguments are passed
+ to the :meth:`flask.Flask.make_response` function as a tuple.
+
+ .. versionadded:: 0.6
+ """
+ if not args:
+ return current_app.response_class()
+ if len(args) == 1:
+ args = args[0]
+ return current_app.make_response(args)
+
+
+def url_for(
+ endpoint: str,
+ *,
+ _anchor: str | None = None,
+ _method: str | None = None,
+ _scheme: str | None = None,
+ _external: bool | None = None,
+ **values: t.Any,
+) -> str:
+ """Generate a URL to the given endpoint with the given values.
+
+ This requires an active request or application context, and calls
+ :meth:`current_app.url_for() <flask.Flask.url_for>`. See that method
+ for full documentation.
+
+ :param endpoint: The endpoint name associated with the URL to
+ generate. If this starts with a ``.``, the current blueprint
+ name (if any) will be used.
+ :param _anchor: If given, append this as ``#anchor`` to the URL.
+ :param _method: If given, generate the URL associated with this
+ method for the endpoint.
+ :param _scheme: If given, the URL will have this scheme if it is
+ external.
+ :param _external: If given, prefer the URL to be internal (False) or
+ require it to be external (True). External URLs include the
+ scheme and domain. When not in an active request, URLs are
+ external by default.
+ :param values: Values to use for the variable parts of the URL rule.
+ Unknown keys are appended as query string arguments, like
+ ``?a=b&c=d``.
+
+ .. versionchanged:: 2.2
+ Calls ``current_app.url_for``, allowing an app to override the
+ behavior.
+
+ .. versionchanged:: 0.10
+ The ``_scheme`` parameter was added.
+
+ .. versionchanged:: 0.9
+ The ``_anchor`` and ``_method`` parameters were added.
+
+ .. versionchanged:: 0.9
+ Calls ``app.handle_url_build_error`` on build errors.
+ """
+ return current_app.url_for(
+ endpoint,
+ _anchor=_anchor,
+ _method=_method,
+ _scheme=_scheme,
+ _external=_external,
+ **values,
+ )
+
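+# A minimal usage sketch (the app, view name, and values are illustrative, not
+# part of this module):
+#
+#     app = Flask(__name__)
+#
+#     @app.route("/user/<int:user_id>")
+#     def profile(user_id):
+#         ...
+#
+#     with app.test_request_context():
+#         url_for("profile", user_id=42)                  # "/user/42"
+#         url_for("profile", user_id=42, page=2)          # "/user/42?page=2"
+#         url_for("profile", user_id=42, _external=True)  # scheme and host included
+#
+# Inside a blueprint, ``url_for(".profile", ...)`` resolves the endpoint
+# relative to the current blueprint name.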
+
+def redirect(
+ location: str, code: int = 302, Response: type[BaseResponse] | None = None
+) -> BaseResponse:
+ """Create a redirect response object.
+
+ If :data:`~flask.current_app` is available, it will use its
+ :meth:`~flask.Flask.redirect` method, otherwise it will use
+ :func:`werkzeug.utils.redirect`.
+
+ :param location: The URL to redirect to.
+ :param code: The status code for the redirect.
+ :param Response: The response class to use. Not used when
+ ``current_app`` is active, which uses ``app.response_class``.
+
+ .. versionadded:: 2.2
+ Calls ``current_app.redirect`` if available instead of always
+ using Werkzeug's default ``redirect``.
+ """
+ if current_app:
+ return current_app.redirect(location, code=code)
+
+ return _wz_redirect(location, code=code, Response=Response)
+
+
+def abort(code: int | BaseResponse, *args: t.Any, **kwargs: t.Any) -> t.NoReturn:
+ """Raise an :exc:`~werkzeug.exceptions.HTTPException` for the given
+ status code.
+
+ If :data:`~flask.current_app` is available, it will call its
+ :attr:`~flask.Flask.aborter` object, otherwise it will use
+ :func:`werkzeug.exceptions.abort`.
+
+ :param code: The status code for the exception, which must be
+ registered in ``app.aborter``.
+ :param args: Passed to the exception.
+ :param kwargs: Passed to the exception.
+
+ .. versionadded:: 2.2
+ Calls ``current_app.aborter`` if available instead of always
+ using Werkzeug's default ``abort``.
+ """
+ if current_app:
+ current_app.aborter(code, *args, **kwargs)
+
+ _wz_abort(code, *args, **kwargs)
+
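+# A short sketch of combining ``abort`` and ``redirect`` in a view; the route,
+# model, and ``lookup_order`` helper are hypothetical:
+#
+#     @app.route("/orders/<int:order_id>")
+#     def show_order(order_id):
+#         order = lookup_order(order_id)
+#         if order is None:
+#             abort(404)  # raises NotFound through app.aborter
+#         if order.moved_to is not None:
+#             return redirect(url_for("show_order", order_id=order.moved_to), code=301)
+#         return render_template("order.html", order=order)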
+
+def get_template_attribute(template_name: str, attribute: str) -> t.Any:
+ """Loads a macro (or variable) a template exports. This can be used to
+ invoke a macro from within Python code. If you for example have a
+ template named :file:`_cider.html` with the following contents:
+
+ .. sourcecode:: html+jinja
+
+ {% macro hello(name) %}Hello {{ name }}!{% endmacro %}
+
+ You can access this from Python code like this::
+
+ hello = get_template_attribute('_cider.html', 'hello')
+ return hello('World')
+
+ .. versionadded:: 0.2
+
+ :param template_name: the name of the template
+ :param attribute: the name of the variable or macro to access
+ """
+ return getattr(current_app.jinja_env.get_template(template_name).module, attribute)
+
+
+def flash(message: str, category: str = "message") -> None:
+ """Flashes a message to the next request. In order to remove the
+ flashed message from the session and to display it to the user,
+ the template has to call :func:`get_flashed_messages`.
+
+ .. versionchanged:: 0.3
+ `category` parameter added.
+
+ :param message: the message to be flashed.
+ :param category: the category for the message. The following values
+ are recommended: ``'message'`` for any kind of message,
+ ``'error'`` for errors, ``'info'`` for information
+ messages and ``'warning'`` for warnings. However, any
+ kind of string can be used as a category.
+ """
+ # Original implementation:
+ #
+ # session.setdefault('_flashes', []).append((category, message))
+ #
+ # This assumed that changes made to mutable structures in the session are
+ # always in sync with the session object, which is not true for session
+ # implementations that use external storage for keeping their keys/values.
+ flashes = session.get("_flashes", [])
+ flashes.append((category, message))
+ session["_flashes"] = flashes
+ app = current_app._get_current_object() # type: ignore
+ message_flashed.send(
+ app,
+ _async_wrapper=app.ensure_sync,
+ message=message,
+ category=category,
+ )
+
+
+def get_flashed_messages(
+ with_categories: bool = False, category_filter: t.Iterable[str] = ()
+) -> list[str] | list[tuple[str, str]]:
+ """Pulls all flashed messages from the session and returns them.
+ Further calls in the same request to the function will return
+ the same messages. By default just the messages are returned,
+ but when `with_categories` is set to ``True``, the return value will
+ be a list of tuples in the form ``(category, message)`` instead.
+
+ Filter the flashed messages to one or more categories by providing those
+ categories in `category_filter`. This allows rendering categories in
+ separate html blocks. The `with_categories` and `category_filter`
+ arguments are distinct:
+
+ * `with_categories` controls whether categories are returned with message
+ text (``True`` gives a tuple, where ``False`` gives just the message text).
+ * `category_filter` filters the messages down to only those matching the
+ provided categories.
+
+ See :doc:`/patterns/flashing` for examples.
+
+ .. versionchanged:: 0.3
+ `with_categories` parameter added.
+
+ .. versionchanged:: 0.9
+ `category_filter` parameter added.
+
+ :param with_categories: set to ``True`` to also receive categories.
+ :param category_filter: filter of categories to limit return values. Only
+ categories in the list will be returned.
+ """
+ flashes = request_ctx.flashes
+ if flashes is None:
+ flashes = session.pop("_flashes") if "_flashes" in session else []
+ request_ctx.flashes = flashes
+ if category_filter:
+ flashes = list(filter(lambda f: f[0] in category_filter, flashes))
+ if not with_categories:
+ return [x[1] for x in flashes]
+ return flashes
+
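+# A minimal flashing round trip; the endpoints are illustrative:
+#
+#     @app.post("/login")
+#     def login():
+#         flash("Invalid credentials.", "error")
+#         return redirect(url_for("login_form"))
+#
+#     # During the next request (normally from a template):
+#     get_flashed_messages(with_categories=True)
+#     # -> [("error", "Invalid credentials.")]
+#     get_flashed_messages(category_filter=["error"])
+#     # -> ["Invalid credentials."]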
+
+def _prepare_send_file_kwargs(**kwargs: t.Any) -> dict[str, t.Any]:
+ if kwargs.get("max_age") is None:
+ kwargs["max_age"] = current_app.get_send_file_max_age
+
+ kwargs.update(
+ environ=request.environ,
+ use_x_sendfile=current_app.config["USE_X_SENDFILE"],
+ response_class=current_app.response_class,
+ _root_path=current_app.root_path, # type: ignore
+ )
+ return kwargs
+
+
+def send_file(
+ path_or_file: os.PathLike[t.AnyStr] | str | t.BinaryIO,
+ mimetype: str | None = None,
+ as_attachment: bool = False,
+ download_name: str | None = None,
+ conditional: bool = True,
+ etag: bool | str = True,
+ last_modified: datetime | int | float | None = None,
+ max_age: None | (int | t.Callable[[str | None], int | None]) = None,
+) -> Response:
+ """Send the contents of a file to the client.
+
+ The first argument can be a file path or a file-like object. Paths
+ are preferred in most cases because Werkzeug can manage the file and
+ get extra information from the path. Passing a file-like object
+ requires that the file is opened in binary mode, and is mostly
+ useful when building a file in memory with :class:`io.BytesIO`.
+
+ Never pass file paths provided by a user. The path is assumed to be
+ trusted, so a user could craft a path to access a file you didn't
+ intend. Use :func:`send_from_directory` to safely serve
+ user-requested paths from within a directory.
+
+ If the WSGI server sets a ``file_wrapper`` in ``environ``, it is
+ used, otherwise Werkzeug's built-in wrapper is used. Alternatively,
+ if the HTTP server supports ``X-Sendfile``, configuring Flask with
+ ``USE_X_SENDFILE = True`` will tell the server to send the given
+ path, which is much more efficient than reading it in Python.
+
+ :param path_or_file: The path to the file to send, relative to the
+ current working directory if a relative path is given.
+ Alternatively, a file-like object opened in binary mode. Make
+ sure the file pointer is positioned at the start of the data.
+ :param mimetype: The MIME type to send for the file. If not
+ provided, it will try to detect it from the file name.
+ :param as_attachment: Indicate to a browser that it should offer to
+ save the file instead of displaying it.
+ :param download_name: The default name browsers will use when saving
+ the file. Defaults to the passed file name.
+ :param conditional: Enable conditional and range responses based on
+ request headers. Requires passing a file path and ``environ``.
+ :param etag: Calculate an ETag for the file, which requires passing
+ a file path. Can also be a string to use instead.
+ :param last_modified: The last modified time to send for the file,
+ in seconds. If not provided, it will try to detect it from the
+ file path.
+ :param max_age: How long the client should cache the file, in
+ seconds. If set, ``Cache-Control`` will be ``public``, otherwise
+ it will be ``no-cache`` to prefer conditional caching.
+
+ .. versionchanged:: 2.0
+ ``download_name`` replaces the ``attachment_filename``
+ parameter. If ``as_attachment=False``, it is passed with
+ ``Content-Disposition: inline`` instead.
+
+ .. versionchanged:: 2.0
+ ``max_age`` replaces the ``cache_timeout`` parameter.
+ ``conditional`` is enabled and ``max_age`` is not set by
+ default.
+
+ .. versionchanged:: 2.0
+ ``etag`` replaces the ``add_etags`` parameter. It can be a
+ string to use instead of generating one.
+
+ .. versionchanged:: 2.0
+ Passing a file-like object that inherits from
+ :class:`~io.TextIOBase` will raise a :exc:`ValueError` rather
+ than sending an empty file.
+
+ .. versionadded:: 2.0
+ Moved the implementation to Werkzeug. This is now a wrapper to
+ pass some Flask-specific arguments.
+
+ .. versionchanged:: 1.1
+ ``filename`` may be a :class:`~os.PathLike` object.
+
+ .. versionchanged:: 1.1
+ Passing a :class:`~io.BytesIO` object supports range requests.
+
+ .. versionchanged:: 1.0.3
+ Filenames are encoded with ASCII instead of Latin-1 for broader
+ compatibility with WSGI servers.
+
+ .. versionchanged:: 1.0
+ UTF-8 filenames as specified in :rfc:`2231` are supported.
+
+ .. versionchanged:: 0.12
+ The filename is no longer automatically inferred from file
+ objects. If you want to use automatic MIME and etag support,
+ pass a filename via ``filename_or_fp`` or
+ ``attachment_filename``.
+
+ .. versionchanged:: 0.12
+ ``attachment_filename`` is preferred over ``filename`` for MIME
+ detection.
+
+ .. versionchanged:: 0.9
+ ``cache_timeout`` defaults to
+ :meth:`Flask.get_send_file_max_age`.
+
+ .. versionchanged:: 0.7
+ MIME guessing and etag support for file-like objects was
+ removed because it was unreliable. Pass a filename if you are
+ able to, otherwise attach an etag yourself.
+
+ .. versionchanged:: 0.5
+ The ``add_etags``, ``cache_timeout`` and ``conditional``
+ parameters were added. The default behavior is to add etags.
+
+ .. versionadded:: 0.2
+ """
+ return werkzeug.utils.send_file( # type: ignore[return-value]
+ **_prepare_send_file_kwargs(
+ path_or_file=path_or_file,
+ environ=request.environ,
+ mimetype=mimetype,
+ as_attachment=as_attachment,
+ download_name=download_name,
+ conditional=conditional,
+ etag=etag,
+ last_modified=last_modified,
+ max_age=max_age,
+ )
+ )
+
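+# Two common call shapes inside a view; the paths and names are illustrative:
+#
+#     # 1. A path on disk: mimetype and ETag are derived from the file.
+#     return send_file("generated/report.pdf", as_attachment=True)
+#
+#     # 2. An in-memory file: pass a download_name (or an explicit mimetype)
+#     #    since there is no path to detect it from.
+#     buf = io.BytesIO(b"hello")
+#     return send_file(buf, download_name="hello.txt")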
+
+def send_from_directory(
+ directory: os.PathLike[str] | str,
+ path: os.PathLike[str] | str,
+ **kwargs: t.Any,
+) -> Response:
+ """Send a file from within a directory using :func:`send_file`.
+
+ .. code-block:: python
+
+ @app.route("/uploads/<path:name>")
+ def download_file(name):
+ return send_from_directory(
+ app.config['UPLOAD_FOLDER'], name, as_attachment=True
+ )
+
+ This is a secure way to serve files from a folder, such as static
+ files or uploads. Uses :func:`~werkzeug.security.safe_join` to
+ ensure the path coming from the client is not maliciously crafted to
+ point outside the specified directory.
+
+ If the final path does not point to an existing regular file,
+ raises a 404 :exc:`~werkzeug.exceptions.NotFound` error.
+
+ :param directory: The directory that ``path`` must be located under,
+ relative to the current application's root path.
+ :param path: The path to the file to send, relative to
+ ``directory``.
+ :param kwargs: Arguments to pass to :func:`send_file`.
+
+ .. versionchanged:: 2.0
+ ``path`` replaces the ``filename`` parameter.
+
+ .. versionadded:: 2.0
+ Moved the implementation to Werkzeug. This is now a wrapper to
+ pass some Flask-specific arguments.
+
+ .. versionadded:: 0.5
+ """
+ return werkzeug.utils.send_from_directory( # type: ignore[return-value]
+ directory, path, **_prepare_send_file_kwargs(**kwargs)
+ )
+
+
+def get_root_path(import_name: str) -> str:
+ """Find the root path of a package, or the path that contains a
+ module. If it cannot be found, returns the current working
+ directory.
+
+ Not to be confused with the value returned by :func:`find_package`.
+
+ :meta private:
+ """
+ # Module already imported and has a file attribute. Use that first.
+ mod = sys.modules.get(import_name)
+
+ if mod is not None and hasattr(mod, "__file__") and mod.__file__ is not None:
+ return os.path.dirname(os.path.abspath(mod.__file__))
+
+ # Next attempt: check the loader.
+ try:
+ spec = importlib.util.find_spec(import_name)
+
+ if spec is None:
+ raise ValueError
+ except (ImportError, ValueError):
+ loader = None
+ else:
+ loader = spec.loader
+
+ # If the loader does not exist, or we're referring to an unloaded main
+ # module or a main module without a path (interactive sessions), fall
+ # back to the current working directory.
+ if loader is None:
+ return os.getcwd()
+
+ if hasattr(loader, "get_filename"):
+ filepath = loader.get_filename(import_name)
+ else:
+ # Fall back to imports.
+ __import__(import_name)
+ mod = sys.modules[import_name]
+ filepath = getattr(mod, "__file__", None)
+
+ # If we don't have a file path it might be because it is a
+ # namespace package. In this case pick the root path from the
+ # first module that is contained in the package.
+ if filepath is None:
+ raise RuntimeError(
+ "No root path can be found for the provided module"
+ f" {import_name!r}. This can happen because the module"
+ " came from an import hook that does not provide file"
+ " name information or because it's a namespace package."
+ " In this case the root path needs to be explicitly"
+ " provided."
+ )
+
+ # filepath is import_name.py for a module, or __init__.py for a package.
+ return os.path.dirname(os.path.abspath(filepath)) # type: ignore[no-any-return]
+
+
+@lru_cache(maxsize=None)
+def _split_blueprint_path(name: str) -> list[str]:
+ out: list[str] = [name]
+
+ if "." in name:
+ out.extend(_split_blueprint_path(name.rpartition(".")[0]))
+
+ return out
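+
+# For reference, ``_split_blueprint_path("admin.users.roles")`` returns
+# ["admin.users.roles", "admin.users", "admin"]: the nested blueprint name
+# followed by each of its ancestors.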
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/flask/json/__init__.py b/stripe_config/.venv/lib/python3.12/site-packages/flask/json/__init__.py
new file mode 100644
index 0000000..c0941d0
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/flask/json/__init__.py
@@ -0,0 +1,170 @@
+from __future__ import annotations
+
+import json as _json
+import typing as t
+
+from ..globals import current_app
+from .provider import _default
+
+if t.TYPE_CHECKING: # pragma: no cover
+ from ..wrappers import Response
+
+
+def dumps(obj: t.Any, **kwargs: t.Any) -> str:
+ """Serialize data as JSON.
+
+ If :data:`~flask.current_app` is available, it will use its
+ :meth:`app.json.dumps() <flask.json.provider.JSONProvider.dumps>`
+ method, otherwise it will use :func:`json.dumps`.
+
+ :param obj: The data to serialize.
+ :param kwargs: Arguments passed to the ``dumps`` implementation.
+
+ .. versionchanged:: 2.3
+ The ``app`` parameter was removed.
+
+ .. versionchanged:: 2.2
+ Calls ``current_app.json.dumps``, allowing an app to override
+ the behavior.
+
+ .. versionchanged:: 2.0.2
+ :class:`decimal.Decimal` is supported by converting to a string.
+
+ .. versionchanged:: 2.0
+ ``encoding`` will be removed in Flask 2.1.
+
+ .. versionchanged:: 1.0.3
+ ``app`` can be passed directly, rather than requiring an app
+ context for configuration.
+ """
+ if current_app:
+ return current_app.json.dumps(obj, **kwargs)
+
+ kwargs.setdefault("default", _default)
+ return _json.dumps(obj, **kwargs)
+
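+# A small sketch of the fallback behavior; ``app`` is assumed to be a Flask
+# instance created elsewhere:
+#
+#     import decimal
+#
+#     with app.app_context():
+#         dumps({"price": decimal.Decimal("9.99")})  # uses app.json.dumps
+#
+#     dumps({"price": decimal.Decimal("9.99")})      # falls back to json.dumps + _default
+#
+# Both calls produce '{"price": "9.99"}'.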
+
+def dump(obj: t.Any, fp: t.IO[str], **kwargs: t.Any) -> None:
+ """Serialize data as JSON and write to a file.
+
+ If :data:`~flask.current_app` is available, it will use its
+ :meth:`app.json.dump() <flask.json.provider.JSONProvider.dump>`
+ method, otherwise it will use :func:`json.dump`.
+
+ :param obj: The data to serialize.
+ :param fp: A file opened for writing text. Should use the UTF-8
+ encoding to be valid JSON.
+ :param kwargs: Arguments passed to the ``dump`` implementation.
+
+ .. versionchanged:: 2.3
+ The ``app`` parameter was removed.
+
+ .. versionchanged:: 2.2
+ Calls ``current_app.json.dump``, allowing an app to override
+ the behavior.
+
+ .. versionchanged:: 2.0
+ Writing to a binary file, and the ``encoding`` argument, will be
+ removed in Flask 2.1.
+ """
+ if current_app:
+ current_app.json.dump(obj, fp, **kwargs)
+ else:
+ kwargs.setdefault("default", _default)
+ _json.dump(obj, fp, **kwargs)
+
+
+def loads(s: str | bytes, **kwargs: t.Any) -> t.Any:
+ """Deserialize data as JSON.
+
+ If :data:`~flask.current_app` is available, it will use its
+ :meth:`app.json.loads() <flask.json.provider.JSONProvider.loads>`
+ method, otherwise it will use :func:`json.loads`.
+
+ :param s: Text or UTF-8 bytes.
+ :param kwargs: Arguments passed to the ``loads`` implementation.
+
+ .. versionchanged:: 2.3
+ The ``app`` parameter was removed.
+
+ .. versionchanged:: 2.2
+ Calls ``current_app.json.loads``, allowing an app to override
+ the behavior.
+
+ .. versionchanged:: 2.0
+ ``encoding`` will be removed in Flask 2.1. The data must be a
+ string or UTF-8 bytes.
+
+ .. versionchanged:: 1.0.3
+ ``app`` can be passed directly, rather than requiring an app
+ context for configuration.
+ """
+ if current_app:
+ return current_app.json.loads(s, **kwargs)
+
+ return _json.loads(s, **kwargs)
+
+
+def load(fp: t.IO[t.AnyStr], **kwargs: t.Any) -> t.Any:
+ """Deserialize data as JSON read from a file.
+
+ If :data:`~flask.current_app` is available, it will use its
+ :meth:`app.json.load() <flask.json.provider.JSONProvider.load>`
+ method, otherwise it will use :func:`json.load`.
+
+ :param fp: A file opened for reading text or UTF-8 bytes.
+ :param kwargs: Arguments passed to the ``load`` implementation.
+
+ .. versionchanged:: 2.3
+ The ``app`` parameter was removed.
+
+ .. versionchanged:: 2.2
+ Calls ``current_app.json.load``, allowing an app to override
+ the behavior.
+
+ .. versionchanged:: 2.2
+ The ``app`` parameter will be removed in Flask 2.3.
+
+ .. versionchanged:: 2.0
+ ``encoding`` will be removed in Flask 2.1. The file must be text
+ mode, or binary mode with UTF-8 bytes.
+ """
+ if current_app:
+ return current_app.json.load(fp, **kwargs)
+
+ return _json.load(fp, **kwargs)
+
+
+def jsonify(*args: t.Any, **kwargs: t.Any) -> Response:
+ """Serialize the given arguments as JSON, and return a
+ :class:`~flask.Response` object with the ``application/json``
+ mimetype. A dict or list returned from a view will be converted to a
+ JSON response automatically without needing to call this.
+
+ This requires an active request or application context, and calls
+ :meth:`app.json.response() <flask.json.provider.JSONProvider.response>`.
+
+ In debug mode, the output is formatted with indentation to make it
+ easier to read. This may also be controlled by the provider.
+
+ Either positional or keyword arguments can be given, not both.
+ If no arguments are given, ``None`` is serialized.
+
+ :param args: A single value to serialize, or multiple values to
+ treat as a list to serialize.
+ :param kwargs: Treat as a dict to serialize.
+
+ .. versionchanged:: 2.2
+ Calls ``current_app.json.response``, allowing an app to override
+ the behavior.
+
+ .. versionchanged:: 2.0.2
+ :class:`decimal.Decimal` is supported by converting to a string.
+
+ .. versionchanged:: 0.11
+ Added support for serializing top-level arrays. This was a
+ security risk in ancient browsers. See :ref:`security-json`.
+
+ .. versionadded:: 0.2
+ """
+ return current_app.json.response(*args, **kwargs) # type: ignore[return-value]
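+
+# Typical shapes; the values are illustrative:
+#
+#     jsonify(username="flask", id=1)   # serializes {"username": "flask", "id": 1}
+#     jsonify([1, 2, 3])                # serializes [1, 2, 3]
+#     jsonify(1, 2, 3)                  # also [1, 2, 3]: multiple args become a list
+#
+# Mixing positional and keyword arguments raises a ``TypeError``.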
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/flask/json/__pycache__/__init__.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/flask/json/__pycache__/__init__.cpython-312.pyc
new file mode 100644
index 0000000..6aecd5d
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/flask/json/__pycache__/__init__.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/flask/json/__pycache__/provider.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/flask/json/__pycache__/provider.cpython-312.pyc
new file mode 100644
index 0000000..f214e98
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/flask/json/__pycache__/provider.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/flask/json/__pycache__/tag.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/flask/json/__pycache__/tag.cpython-312.pyc
new file mode 100644
index 0000000..4e6d253
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/flask/json/__pycache__/tag.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/flask/json/provider.py b/stripe_config/.venv/lib/python3.12/site-packages/flask/json/provider.py
new file mode 100644
index 0000000..f9b2e8f
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/flask/json/provider.py
@@ -0,0 +1,215 @@
+from __future__ import annotations
+
+import dataclasses
+import decimal
+import json
+import typing as t
+import uuid
+import weakref
+from datetime import date
+
+from werkzeug.http import http_date
+
+if t.TYPE_CHECKING: # pragma: no cover
+ from werkzeug.sansio.response import Response
+
+ from ..sansio.app import App
+
+
+class JSONProvider:
+ """A standard set of JSON operations for an application. Subclasses
+ of this can be used to customize JSON behavior or use different
+ JSON libraries.
+
+ To implement a provider for a specific library, subclass this base
+ class and implement at least :meth:`dumps` and :meth:`loads`. All
+ other methods have default implementations.
+
+ To use a different provider, either subclass ``Flask`` and set
+ :attr:`~flask.Flask.json_provider_class` to a provider class, or set
+ :attr:`app.json <flask.Flask.json>` to an instance of the class.
+
+ :param app: An application instance. This will be stored as a
+ :class:`weakref.proxy` on the :attr:`_app` attribute.
+
+ .. versionadded:: 2.2
+ """
+
+ def __init__(self, app: App) -> None:
+ self._app: App = weakref.proxy(app)
+
+ def dumps(self, obj: t.Any, **kwargs: t.Any) -> str:
+ """Serialize data as JSON.
+
+ :param obj: The data to serialize.
+ :param kwargs: May be passed to the underlying JSON library.
+ """
+ raise NotImplementedError
+
+ def dump(self, obj: t.Any, fp: t.IO[str], **kwargs: t.Any) -> None:
+ """Serialize data as JSON and write to a file.
+
+ :param obj: The data to serialize.
+ :param fp: A file opened for writing text. Should use the UTF-8
+ encoding to be valid JSON.
+ :param kwargs: May be passed to the underlying JSON library.
+ """
+ fp.write(self.dumps(obj, **kwargs))
+
+ def loads(self, s: str | bytes, **kwargs: t.Any) -> t.Any:
+ """Deserialize data as JSON.
+
+ :param s: Text or UTF-8 bytes.
+ :param kwargs: May be passed to the underlying JSON library.
+ """
+ raise NotImplementedError
+
+ def load(self, fp: t.IO[t.AnyStr], **kwargs: t.Any) -> t.Any:
+ """Deserialize data as JSON read from a file.
+
+ :param fp: A file opened for reading text or UTF-8 bytes.
+ :param kwargs: May be passed to the underlying JSON library.
+ """
+ return self.loads(fp.read(), **kwargs)
+
+ def _prepare_response_obj(
+ self, args: tuple[t.Any, ...], kwargs: dict[str, t.Any]
+ ) -> t.Any:
+ if args and kwargs:
+ raise TypeError("app.json.response() takes either args or kwargs, not both")
+
+ if not args and not kwargs:
+ return None
+
+ if len(args) == 1:
+ return args[0]
+
+ return args or kwargs
+
+ def response(self, *args: t.Any, **kwargs: t.Any) -> Response:
+ """Serialize the given arguments as JSON, and return a
+ :class:`~flask.Response` object with the ``application/json``
+ mimetype.
+
+ The :func:`~flask.json.jsonify` function calls this method for
+ the current application.
+
+ Either positional or keyword arguments can be given, not both.
+ If no arguments are given, ``None`` is serialized.
+
+ :param args: A single value to serialize, or multiple values to
+ treat as a list to serialize.
+ :param kwargs: Treat as a dict to serialize.
+ """
+ obj = self._prepare_response_obj(args, kwargs)
+ return self._app.response_class(self.dumps(obj), mimetype="application/json")
+
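+# How ``response`` maps its arguments, as implemented by
+# ``_prepare_response_obj`` above:
+#
+#     response()            # serializes None
+#     response({"a": 1})    # serializes {"a": 1}
+#     response(1, 2, 3)     # serializes [1, 2, 3]
+#     response(a=1, b=2)    # serializes {"a": 1, "b": 2}
+#     response(1, b=2)      # TypeError: args or kwargs, not both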
+
+def _default(o: t.Any) -> t.Any:
+ if isinstance(o, date):
+ return http_date(o)
+
+ if isinstance(o, (decimal.Decimal, uuid.UUID)):
+ return str(o)
+
+ if dataclasses and dataclasses.is_dataclass(o):
+ return dataclasses.asdict(o)
+
+ if hasattr(o, "__html__"):
+ return str(o.__html__())
+
+ raise TypeError(f"Object of type {type(o).__name__} is not JSON serializable")
+
+
+class DefaultJSONProvider(JSONProvider):
+ """Provide JSON operations using Python's built-in :mod:`json`
+ library. Serializes the following additional data types:
+
+ - :class:`datetime.datetime` and :class:`datetime.date` are
+ serialized to :rfc:`822` strings. This is the same as the HTTP
+ date format.
+ - :class:`uuid.UUID` is serialized to a string.
+ - :class:`dataclasses.dataclass` is passed to
+ :func:`dataclasses.asdict`.
+ - :class:`~markupsafe.Markup` (or any object with a ``__html__``
+ method) will call the ``__html__`` method to get a string.
+ """
+
+ default: t.Callable[[t.Any], t.Any] = staticmethod(_default) # type: ignore[assignment]
+ """Apply this function to any object that :meth:`json.dumps` does
+ not know how to serialize. It should return a valid JSON type or
+ raise a ``TypeError``.
+ """
+
+ ensure_ascii = True
+ """Replace non-ASCII characters with escape sequences. This may be
+ more compatible with some clients, but can be disabled for better
+ performance and size.
+ """
+
+ sort_keys = True
+ """Sort the keys in any serialized dicts. This may be useful for
+ some caching situations, but can be disabled for better performance.
+ When enabled, keys must all be strings; they are not converted
+ before sorting.
+ """
+
+ compact: bool | None = None
+ """If ``True``, or ``None`` out of debug mode, the :meth:`response`
+ output will not add indentation, newlines, or spaces. If ``False``,
+ or ``None`` in debug mode, it will use a non-compact representation.
+ """
+
+ mimetype = "application/json"
+ """The mimetype set in :meth:`response`."""
+
+ def dumps(self, obj: t.Any, **kwargs: t.Any) -> str:
+ """Serialize data as JSON to a string.
+
+ Keyword arguments are passed to :func:`json.dumps`. Sets some
+ parameter defaults from the :attr:`default`,
+ :attr:`ensure_ascii`, and :attr:`sort_keys` attributes.
+
+ :param obj: The data to serialize.
+ :param kwargs: Passed to :func:`json.dumps`.
+ """
+ kwargs.setdefault("default", self.default)
+ kwargs.setdefault("ensure_ascii", self.ensure_ascii)
+ kwargs.setdefault("sort_keys", self.sort_keys)
+ return json.dumps(obj, **kwargs)
+
+ def loads(self, s: str | bytes, **kwargs: t.Any) -> t.Any:
+ """Deserialize data as JSON from a string or bytes.
+
+ :param s: Text or UTF-8 bytes.
+ :param kwargs: Passed to :func:`json.loads`.
+ """
+ return json.loads(s, **kwargs)
+
+ def response(self, *args: t.Any, **kwargs: t.Any) -> Response:
+ """Serialize the given arguments as JSON, and return a
+ :class:`~flask.Response` object with it. The response mimetype
+ will be "application/json" and can be changed with
+ :attr:`mimetype`.
+
+ If :attr:`compact` is ``False``, or is ``None`` while debug mode is
+ enabled, the output will be indented to be easier to read.
+
+ Either positional or keyword arguments can be given, not both.
+ If no arguments are given, ``None`` is serialized.
+
+ :param args: A single value to serialize, or multiple values to
+ treat as a list to serialize.
+ :param kwargs: Treat as a dict to serialize.
+ """
+ obj = self._prepare_response_obj(args, kwargs)
+ dump_args: dict[str, t.Any] = {}
+
+ if (self.compact is None and self._app.debug) or self.compact is False:
+ dump_args.setdefault("indent", 2)
+ else:
+ dump_args.setdefault("separators", (",", ":"))
+
+ return self._app.response_class(
+ f"{self.dumps(obj, **dump_args)}\n", mimetype=self.mimetype
+ )
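+
+# A sketch of customizing serialization; the subclass names are illustrative:
+#
+#     class OrderedJSONProvider(DefaultJSONProvider):
+#         sort_keys = False     # keep dict insertion order
+#         ensure_ascii = False  # emit UTF-8 instead of \u escapes
+#
+#     class MyFlask(Flask):
+#         json_provider_class = OrderedJSONProvider
+#
+#     app = MyFlask(__name__)  # app.json is now an OrderedJSONProvider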
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/flask/json/tag.py b/stripe_config/.venv/lib/python3.12/site-packages/flask/json/tag.py
new file mode 100644
index 0000000..8dc3629
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/flask/json/tag.py
@@ -0,0 +1,327 @@
+"""
+Tagged JSON
+~~~~~~~~~~~
+
+A compact representation for lossless serialization of non-standard JSON
+types. :class:`~flask.sessions.SecureCookieSessionInterface` uses this
+to serialize the session data, but it may be useful in other places. It
+can be extended to support other types.
+
+.. autoclass:: TaggedJSONSerializer
+ :members:
+
+.. autoclass:: JSONTag
+ :members:
+
+Let's see an example that adds support for
+:class:`~collections.OrderedDict`. Dicts don't have an order in JSON, so
+to handle this we will dump the items as a list of ``[key, value]``
+pairs. Subclass :class:`JSONTag` and give it the new key ``' od'`` to
+identify the type. The session serializer processes dicts first, so
+insert the new tag at the front of the order since ``OrderedDict`` must
+be processed before ``dict``.
+
+.. code-block:: python
+
+ from collections import OrderedDict
+
+ from flask.json.tag import JSONTag
+
+ class TagOrderedDict(JSONTag):
+ __slots__ = ('serializer',)
+ key = ' od'
+
+ def check(self, value):
+ return isinstance(value, OrderedDict)
+
+ def to_json(self, value):
+ return [[k, self.serializer.tag(v)] for k, v in value.items()]
+
+ def to_python(self, value):
+ return OrderedDict(value)
+
+ app.session_interface.serializer.register(TagOrderedDict, index=0)
+"""
+
+from __future__ import annotations
+
+import typing as t
+from base64 import b64decode
+from base64 import b64encode
+from datetime import datetime
+from uuid import UUID
+
+from markupsafe import Markup
+from werkzeug.http import http_date
+from werkzeug.http import parse_date
+
+from ..json import dumps
+from ..json import loads
+
+
+class JSONTag:
+ """Base class for defining type tags for :class:`TaggedJSONSerializer`."""
+
+ __slots__ = ("serializer",)
+
+ #: The tag to mark the serialized object with. If empty, this tag is
+ #: only used as an intermediate step during tagging.
+ key: str = ""
+
+ def __init__(self, serializer: TaggedJSONSerializer) -> None:
+ """Create a tagger for the given serializer."""
+ self.serializer = serializer
+
+ def check(self, value: t.Any) -> bool:
+ """Check if the given value should be tagged by this tag."""
+ raise NotImplementedError
+
+ def to_json(self, value: t.Any) -> t.Any:
+ """Convert the Python object to an object that is a valid JSON type.
+ The tag will be added later."""
+ raise NotImplementedError
+
+ def to_python(self, value: t.Any) -> t.Any:
+ """Convert the JSON representation back to the correct type. The tag
+ will already be removed."""
+ raise NotImplementedError
+
+ def tag(self, value: t.Any) -> dict[str, t.Any]:
+ """Convert the value to a valid JSON type and add the tag structure
+ around it."""
+ return {self.key: self.to_json(value)}
+
+
+class TagDict(JSONTag):
+ """Tag for 1-item dicts whose only key matches a registered tag.
+
+ Internally, the dict key is suffixed with `__`, and the suffix is removed
+ when deserializing.
+ """
+
+ __slots__ = ()
+ key = " di"
+
+ def check(self, value: t.Any) -> bool:
+ return (
+ isinstance(value, dict)
+ and len(value) == 1
+ and next(iter(value)) in self.serializer.tags
+ )
+
+ def to_json(self, value: t.Any) -> t.Any:
+ key = next(iter(value))
+ return {f"{key}__": self.serializer.tag(value[key])}
+
+ def to_python(self, value: t.Any) -> t.Any:
+ key = next(iter(value))
+ return {key[:-2]: value[key]}
+
+
+class PassDict(JSONTag):
+ __slots__ = ()
+
+ def check(self, value: t.Any) -> bool:
+ return isinstance(value, dict)
+
+ def to_json(self, value: t.Any) -> t.Any:
+ # JSON objects may only have string keys, so don't bother tagging the
+ # key here.
+ return {k: self.serializer.tag(v) for k, v in value.items()}
+
+ tag = to_json
+
+
+class TagTuple(JSONTag):
+ __slots__ = ()
+ key = " t"
+
+ def check(self, value: t.Any) -> bool:
+ return isinstance(value, tuple)
+
+ def to_json(self, value: t.Any) -> t.Any:
+ return [self.serializer.tag(item) for item in value]
+
+ def to_python(self, value: t.Any) -> t.Any:
+ return tuple(value)
+
+
+class PassList(JSONTag):
+ __slots__ = ()
+
+ def check(self, value: t.Any) -> bool:
+ return isinstance(value, list)
+
+ def to_json(self, value: t.Any) -> t.Any:
+ return [self.serializer.tag(item) for item in value]
+
+ tag = to_json
+
+
+class TagBytes(JSONTag):
+ __slots__ = ()
+ key = " b"
+
+ def check(self, value: t.Any) -> bool:
+ return isinstance(value, bytes)
+
+ def to_json(self, value: t.Any) -> t.Any:
+ return b64encode(value).decode("ascii")
+
+ def to_python(self, value: t.Any) -> t.Any:
+ return b64decode(value)
+
+
+class TagMarkup(JSONTag):
+ """Serialize anything matching the :class:`~markupsafe.Markup` API by
+ exposing a ``__html__`` method; the value is serialized to the result
+ of calling that method. Always deserializes to an instance of
+ :class:`~markupsafe.Markup`."""
+
+ __slots__ = ()
+ key = " m"
+
+ def check(self, value: t.Any) -> bool:
+ return callable(getattr(value, "__html__", None))
+
+ def to_json(self, value: t.Any) -> t.Any:
+ return str(value.__html__())
+
+ def to_python(self, value: t.Any) -> t.Any:
+ return Markup(value)
+
+
+class TagUUID(JSONTag):
+ __slots__ = ()
+ key = " u"
+
+ def check(self, value: t.Any) -> bool:
+ return isinstance(value, UUID)
+
+ def to_json(self, value: t.Any) -> t.Any:
+ return value.hex
+
+ def to_python(self, value: t.Any) -> t.Any:
+ return UUID(value)
+
+
+class TagDateTime(JSONTag):
+ __slots__ = ()
+ key = " d"
+
+ def check(self, value: t.Any) -> bool:
+ return isinstance(value, datetime)
+
+ def to_json(self, value: t.Any) -> t.Any:
+ return http_date(value)
+
+ def to_python(self, value: t.Any) -> t.Any:
+ return parse_date(value)
+
+
+class TaggedJSONSerializer:
+ """Serializer that uses a tag system to compactly represent objects that
+ are not JSON types. Passed as the intermediate serializer to
+ :class:`itsdangerous.Serializer`.
+
+ The following extra types are supported:
+
+ * :class:`dict`
+ * :class:`tuple`
+ * :class:`bytes`
+ * :class:`~markupsafe.Markup`
+ * :class:`~uuid.UUID`
+ * :class:`~datetime.datetime`
+ """
+
+ __slots__ = ("tags", "order")
+
+ #: Tag classes to bind when creating the serializer. Other tags can be
+ #: added later using :meth:`~register`.
+ default_tags = [
+ TagDict,
+ PassDict,
+ TagTuple,
+ PassList,
+ TagBytes,
+ TagMarkup,
+ TagUUID,
+ TagDateTime,
+ ]
+
+ def __init__(self) -> None:
+ self.tags: dict[str, JSONTag] = {}
+ self.order: list[JSONTag] = []
+
+ for cls in self.default_tags:
+ self.register(cls)
+
+ def register(
+ self,
+ tag_class: type[JSONTag],
+ force: bool = False,
+ index: int | None = None,
+ ) -> None:
+ """Register a new tag with this serializer.
+
+ :param tag_class: tag class to register. Will be instantiated with this
+ serializer instance.
+ :param force: overwrite an existing tag. If false (default), a
+ :exc:`KeyError` is raised.
+ :param index: index to insert the new tag in the tag order. Useful when
+ the new tag is a special case of an existing tag. If ``None``
+ (default), the tag is appended to the end of the order.
+
+ :raise KeyError: if the tag key is already registered and ``force`` is
+ not true.
+ """
+ tag = tag_class(self)
+ key = tag.key
+
+ if key:
+ if not force and key in self.tags:
+ raise KeyError(f"Tag '{key}' is already registered.")
+
+ self.tags[key] = tag
+
+ if index is None:
+ self.order.append(tag)
+ else:
+ self.order.insert(index, tag)
+
+ def tag(self, value: t.Any) -> t.Any:
+ """Convert a value to a tagged representation if necessary."""
+ for tag in self.order:
+ if tag.check(value):
+ return tag.tag(value)
+
+ return value
+
+ def untag(self, value: dict[str, t.Any]) -> t.Any:
+ """Convert a tagged representation back to the original type."""
+ if len(value) != 1:
+ return value
+
+ key = next(iter(value))
+
+ if key not in self.tags:
+ return value
+
+ return self.tags[key].to_python(value[key])
+
+ def _untag_scan(self, value: t.Any) -> t.Any:
+ if isinstance(value, dict):
+ # untag each item recursively
+ value = {k: self._untag_scan(v) for k, v in value.items()}
+ # untag the dict itself
+ value = self.untag(value)
+ elif isinstance(value, list):
+ # untag each item recursively
+ value = [self._untag_scan(item) for item in value]
+
+ return value
+
+ def dumps(self, value: t.Any) -> str:
+ """Tag the value and dump it to a compact JSON string."""
+ return dumps(self.tag(value), separators=(",", ":"))
+
+ def loads(self, value: str) -> t.Any:
+ """Load data from a JSON string and deserialize any tagged objects."""
+ return self._untag_scan(loads(value))
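+
+# A round-trip sketch; the payload is illustrative:
+#
+#     s = TaggedJSONSerializer()
+#     data = {"pair": (1, 2), "raw": b"\x00", "markup": Markup("<b>hi</b>")}
+#     s.loads(s.dumps(data)) == data  # True: tuples, bytes and Markup survive
+#
+# This is what lets the secure cookie session store such values losslessly.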
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/flask/logging.py b/stripe_config/.venv/lib/python3.12/site-packages/flask/logging.py
new file mode 100644
index 0000000..0cb8f43
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/flask/logging.py
@@ -0,0 +1,79 @@
+from __future__ import annotations
+
+import logging
+import sys
+import typing as t
+
+from werkzeug.local import LocalProxy
+
+from .globals import request
+
+if t.TYPE_CHECKING: # pragma: no cover
+ from .sansio.app import App
+
+
+@LocalProxy
+def wsgi_errors_stream() -> t.TextIO:
+ """Find the most appropriate error stream for the application. If a request
+ is active, log to ``wsgi.errors``, otherwise use ``sys.stderr``.
+
+ If you configure your own :class:`logging.StreamHandler`, you may want to
+ use this for the stream. If you are using file or dict configuration and
+ can't import this directly, you can refer to it as
+ ``ext://flask.logging.wsgi_errors_stream``.
+ """
+ if request:
+ return request.environ["wsgi.errors"] # type: ignore[no-any-return]
+
+ return sys.stderr
+
+
+def has_level_handler(logger: logging.Logger) -> bool:
+ """Check if there is a handler in the logging chain that will handle the
+ given logger's :meth:`effective level <~logging.Logger.getEffectiveLevel>`.
+ """
+ level = logger.getEffectiveLevel()
+ current = logger
+
+ while current:
+ if any(handler.level <= level for handler in current.handlers):
+ return True
+
+ if not current.propagate:
+ break
+
+ current = current.parent # type: ignore
+
+ return False
+
+
+#: Log messages to :func:`~flask.logging.wsgi_errors_stream` with the format
+#: ``[%(asctime)s] %(levelname)s in %(module)s: %(message)s``.
+default_handler = logging.StreamHandler(wsgi_errors_stream) # type: ignore
+default_handler.setFormatter(
+ logging.Formatter("[%(asctime)s] %(levelname)s in %(module)s: %(message)s")
+)
+
+
+def create_logger(app: App) -> logging.Logger:
+ """Get the Flask app's logger and configure it if needed.
+
+ The logger name will be the same as
+ :attr:`app.import_name <flask.Flask.import_name>`.
+
+ When :attr:`~flask.Flask.debug` is enabled, set the logger level to
+ :data:`logging.DEBUG` if it is not set.
+
+ If there is no handler for the logger's effective level, add a
+ :class:`~logging.StreamHandler` for
+ :func:`~flask.logging.wsgi_errors_stream` with a basic format.
+ """
+ logger = logging.getLogger(app.name)
+
+ if app.debug and not logger.level:
+ logger.setLevel(logging.DEBUG)
+
+ if not has_level_handler(logger):
+ logger.addHandler(default_handler)
+
+ return logger
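+
+# A configuration sketch (the format and log level are illustrative): wiring
+# the root logger to the same stream Flask would pick, before creating the app.
+#
+#     from logging.config import dictConfig
+#
+#     dictConfig({
+#         "version": 1,
+#         "formatters": {"default": {
+#             "format": "[%(asctime)s] %(levelname)s in %(module)s: %(message)s",
+#         }},
+#         "handlers": {"wsgi": {
+#             "class": "logging.StreamHandler",
+#             "stream": "ext://flask.logging.wsgi_errors_stream",
+#             "formatter": "default",
+#         }},
+#         "root": {"level": "INFO", "handlers": ["wsgi"]},
+#     })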
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/flask/py.typed b/stripe_config/.venv/lib/python3.12/site-packages/flask/py.typed
new file mode 100644
index 0000000..e69de29
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/flask/sansio/README.md b/stripe_config/.venv/lib/python3.12/site-packages/flask/sansio/README.md
new file mode 100644
index 0000000..623ac19
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/flask/sansio/README.md
@@ -0,0 +1,6 @@
+# Sansio
+
+This folder contains code that can be used by alternative Flask
+implementations, for example Quart. The code therefore cannot do any
+ IO, nor be part of a likely IO path. Finally, this code cannot use the
+Flask globals.
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/flask/sansio/__pycache__/app.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/flask/sansio/__pycache__/app.cpython-312.pyc
new file mode 100644
index 0000000..b8c88e3
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/flask/sansio/__pycache__/app.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/flask/sansio/__pycache__/blueprints.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/flask/sansio/__pycache__/blueprints.cpython-312.pyc
new file mode 100644
index 0000000..965c0bd
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/flask/sansio/__pycache__/blueprints.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/flask/sansio/__pycache__/scaffold.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/flask/sansio/__pycache__/scaffold.cpython-312.pyc
new file mode 100644
index 0000000..4018777
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/flask/sansio/__pycache__/scaffold.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/flask/sansio/app.py b/stripe_config/.venv/lib/python3.12/site-packages/flask/sansio/app.py
new file mode 100644
index 0000000..01fd5db
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/flask/sansio/app.py
@@ -0,0 +1,964 @@
+from __future__ import annotations
+
+import logging
+import os
+import sys
+import typing as t
+from datetime import timedelta
+from itertools import chain
+
+from werkzeug.exceptions import Aborter
+from werkzeug.exceptions import BadRequest
+from werkzeug.exceptions import BadRequestKeyError
+from werkzeug.routing import BuildError
+from werkzeug.routing import Map
+from werkzeug.routing import Rule
+from werkzeug.sansio.response import Response
+from werkzeug.utils import cached_property
+from werkzeug.utils import redirect as _wz_redirect
+
+from .. import typing as ft
+from ..config import Config
+from ..config import ConfigAttribute
+from ..ctx import _AppCtxGlobals
+from ..helpers import _split_blueprint_path
+from ..helpers import get_debug_flag
+from ..json.provider import DefaultJSONProvider
+from ..json.provider import JSONProvider
+from ..logging import create_logger
+from ..templating import DispatchingJinjaLoader
+from ..templating import Environment
+from .scaffold import _endpoint_from_view_func
+from .scaffold import find_package
+from .scaffold import Scaffold
+from .scaffold import setupmethod
+
+if t.TYPE_CHECKING: # pragma: no cover
+ from werkzeug.wrappers import Response as BaseResponse
+
+ from ..testing import FlaskClient
+ from ..testing import FlaskCliRunner
+ from .blueprints import Blueprint
+
+T_shell_context_processor = t.TypeVar(
+ "T_shell_context_processor", bound=ft.ShellContextProcessorCallable
+)
+T_teardown = t.TypeVar("T_teardown", bound=ft.TeardownCallable)
+T_template_filter = t.TypeVar("T_template_filter", bound=ft.TemplateFilterCallable)
+T_template_global = t.TypeVar("T_template_global", bound=ft.TemplateGlobalCallable)
+T_template_test = t.TypeVar("T_template_test", bound=ft.TemplateTestCallable)
+
+
+def _make_timedelta(value: timedelta | int | None) -> timedelta | None:
+ if value is None or isinstance(value, timedelta):
+ return value
+
+ return timedelta(seconds=value)
+
+
+class App(Scaffold):
+ """The flask object implements a WSGI application and acts as the central
+ object. It is passed the name of the module or package of the
+ application. Once it is created it will act as a central registry for
+ the view functions, the URL rules, template configuration and much more.
+
+ The name of the package is used to resolve resources from inside the
+ package or the folder the module is contained in depending on if the
+ package parameter resolves to an actual python package (a folder with
+ an :file:`__init__.py` file inside) or a standard module (just a ``.py`` file).
+
+ For more information about resource loading, see :func:`open_resource`.
+
+ Usually you create a :class:`Flask` instance in your main module or
+ in the :file:`__init__.py` file of your package like this::
+
+ from flask import Flask
+ app = Flask(__name__)
+
+ .. admonition:: About the First Parameter
+
+ The idea of the first parameter is to give Flask an idea of what
+ belongs to your application. This name is used to find resources
+ on the filesystem, can be used by extensions to improve debugging
+ information and a lot more.
+
+ So it's important what you provide there. If you are using a single
+ module, `__name__` is always the correct value. If, however, you are
+ using a package, it's usually recommended to hardcode the name of
+ your package there.
+
+ For example if your application is defined in :file:`yourapplication/app.py`
+ you should create it with one of the two versions below::
+
+ app = Flask('yourapplication')
+ app = Flask(__name__.split('.')[0])
+
+ Why is that? The application will work even with `__name__`, thanks
+ to how resources are looked up. However, it will make debugging more
+ painful. Certain extensions can make assumptions based on the
+ import name of your application. For example the Flask-SQLAlchemy
+ extension will look for the code in your application that triggered
+ an SQL query in debug mode. If the import name is not properly set
+ up, that debugging information is lost. (For example it would only
+ pick up SQL queries in `yourapplication.app` and not
+ `yourapplication.views.frontend`)
+
+ .. versionadded:: 0.7
+ The `static_url_path`, `static_folder`, and `template_folder`
+ parameters were added.
+
+ .. versionadded:: 0.8
+ The `instance_path` and `instance_relative_config` parameters were
+ added.
+
+ .. versionadded:: 0.11
+ The `root_path` parameter was added.
+
+ .. versionadded:: 1.0
+ The ``host_matching`` and ``static_host`` parameters were added.
+
+ .. versionadded:: 1.0
+ The ``subdomain_matching`` parameter was added. Subdomain
+ matching needs to be enabled manually now. Setting
+ :data:`SERVER_NAME` does not implicitly enable it.
+
+ :param import_name: the name of the application package
+ :param static_url_path: can be used to specify a different path for the
+ static files on the web. Defaults to the name
+ of the `static_folder` folder.
+ :param static_folder: The folder with static files that is served at
+ ``static_url_path``. Relative to the application ``root_path``
+ or an absolute path. Defaults to ``'static'``.
+ :param static_host: the host to use when adding the static route.
+ Defaults to None. Required when using ``host_matching=True``
+ with a ``static_folder`` configured.
+ :param host_matching: set ``url_map.host_matching`` attribute.
+ Defaults to False.
+ :param subdomain_matching: consider the subdomain relative to
+ :data:`SERVER_NAME` when matching routes. Defaults to False.
+ :param template_folder: the folder that contains the templates that should
+ be used by the application. Defaults to
+ ``'templates'`` folder in the root path of the
+ application.
+ :param instance_path: An alternative instance path for the application.
+ By default the folder ``'instance'`` next to the
+ package or module is assumed to be the instance
+ path.
+ :param instance_relative_config: if set to ``True`` relative filenames
+ for loading the config are assumed to
+ be relative to the instance path instead
+ of the application root.
+ :param root_path: The path to the root of the application files.
+ This should only be set manually when it can't be detected
+ automatically, such as for namespace packages.
+ """
+
+ #: The class of the object assigned to :attr:`aborter`, created by
+ #: :meth:`create_aborter`. That object is called by
+ #: :func:`flask.abort` to raise HTTP errors, and can be
+ #: called directly as well.
+ #:
+ #: Defaults to :class:`werkzeug.exceptions.Aborter`.
+ #:
+ #: .. versionadded:: 2.2
+ aborter_class = Aborter
+
+ #: The class that is used for the Jinja environment.
+ #:
+ #: .. versionadded:: 0.11
+ jinja_environment = Environment
+
+ #: The class that is used for the :data:`~flask.g` instance.
+ #:
+ #: Example use cases for a custom class:
+ #:
+ #: 1. Store arbitrary attributes on flask.g.
+ #: 2. Add a property for lazy per-request database connectors.
+ #: 3. Return None instead of AttributeError on unexpected attributes.
+ #: 4. Raise exception if an unexpected attr is set, a "controlled" flask.g.
+ #:
+ #: In Flask 0.9 this property was called `request_globals_class` but it
+ #: was changed in 0.10 to :attr:`app_ctx_globals_class` because the
+ #: flask.g object is now application context scoped.
+ #:
+ #: .. versionadded:: 0.10
+ app_ctx_globals_class = _AppCtxGlobals
+
+ #: The class that is used for the ``config`` attribute of this app.
+ #: Defaults to :class:`~flask.Config`.
+ #:
+ #: Example use cases for a custom class:
+ #:
+ #: 1. Default values for certain config options.
+ #: 2. Access to config values through attributes in addition to keys.
+ #:
+ #: .. versionadded:: 0.11
+ config_class = Config
+
+ #: The testing flag. Set this to ``True`` to enable the test mode of
+ #: Flask extensions (and in the future probably also Flask itself).
+ #: For example this might activate test helpers that have an
+ #: additional runtime cost which should not be enabled by default.
+ #:
+ #: If this is enabled and PROPAGATE_EXCEPTIONS is not changed from the
+ #: default it's implicitly enabled.
+ #:
+ #: This attribute can also be configured from the config with the
+ #: ``TESTING`` configuration key. Defaults to ``False``.
+ testing = ConfigAttribute[bool]("TESTING")
+
+ #: If a secret key is set, cryptographic components can use this to
+ #: sign cookies and other things. Set this to a complex random value
+ #: when you want to use the secure cookie for instance.
+ #:
+ #: This attribute can also be configured from the config with the
+ #: :data:`SECRET_KEY` configuration key. Defaults to ``None``.
+ secret_key = ConfigAttribute[t.Union[str, bytes, None]]("SECRET_KEY")
+
+ #: A :class:`~datetime.timedelta` which is used to set the expiration
+ #: date of a permanent session. The default is 31 days which makes a
+ #: permanent session survive for roughly one month.
+ #:
+ #: This attribute can also be configured from the config with the
+ #: ``PERMANENT_SESSION_LIFETIME`` configuration key. Defaults to
+ #: ``timedelta(days=31)``
+ permanent_session_lifetime = ConfigAttribute[timedelta](
+ "PERMANENT_SESSION_LIFETIME",
+ get_converter=_make_timedelta, # type: ignore[arg-type]
+ )
+
+ json_provider_class: type[JSONProvider] = DefaultJSONProvider
+ """A subclass of :class:`~flask.json.provider.JSONProvider`. An
+ instance is created and assigned to :attr:`app.json` when creating
+ the app.
+
+ The default, :class:`~flask.json.provider.DefaultJSONProvider`, uses
+ Python's built-in :mod:`json` library. A different provider can use
+ a different JSON library.
+
+ .. versionadded:: 2.2
+ """
+
+ #: Options that are passed to the Jinja environment in
+ #: :meth:`create_jinja_environment`. Changing these options after
+ #: the environment is created (accessing :attr:`jinja_env`) will
+ #: have no effect.
+ #:
+ #: .. versionchanged:: 1.1.0
+ #: This is a ``dict`` instead of an ``ImmutableDict`` to allow
+ #: easier configuration.
+ #:
+ jinja_options: dict[str, t.Any] = {}
+
+ #: The rule object to use for URL rules created. This is used by
+ #: :meth:`add_url_rule`. Defaults to :class:`werkzeug.routing.Rule`.
+ #:
+ #: .. versionadded:: 0.7
+ url_rule_class = Rule
+
+ #: The map object to use for storing the URL rules and routing
+ #: configuration parameters. Defaults to :class:`werkzeug.routing.Map`.
+ #:
+ #: .. versionadded:: 1.1.0
+ url_map_class = Map
+
+ #: The :meth:`test_client` method creates an instance of this test
+ #: client class. Defaults to :class:`~flask.testing.FlaskClient`.
+ #:
+ #: .. versionadded:: 0.7
+ test_client_class: type[FlaskClient] | None = None
+
+ #: The :class:`~click.testing.CliRunner` subclass, by default
+ #: :class:`~flask.testing.FlaskCliRunner` that is used by
+ #: :meth:`test_cli_runner`. Its ``__init__`` method should take a
+ #: Flask app object as the first argument.
+ #:
+ #: .. versionadded:: 1.0
+ test_cli_runner_class: type[FlaskCliRunner] | None = None
+
+ default_config: dict[str, t.Any]
+ response_class: type[Response]
+
+ def __init__(
+ self,
+ import_name: str,
+ static_url_path: str | None = None,
+ static_folder: str | os.PathLike[str] | None = "static",
+ static_host: str | None = None,
+ host_matching: bool = False,
+ subdomain_matching: bool = False,
+ template_folder: str | os.PathLike[str] | None = "templates",
+ instance_path: str | None = None,
+ instance_relative_config: bool = False,
+ root_path: str | None = None,
+ ):
+ super().__init__(
+ import_name=import_name,
+ static_folder=static_folder,
+ static_url_path=static_url_path,
+ template_folder=template_folder,
+ root_path=root_path,
+ )
+
+ if instance_path is None:
+ instance_path = self.auto_find_instance_path()
+ elif not os.path.isabs(instance_path):
+ raise ValueError(
+ "If an instance path is provided it must be absolute."
+ " A relative path was given instead."
+ )
+
+ #: Holds the path to the instance folder.
+ #:
+ #: .. versionadded:: 0.8
+ self.instance_path = instance_path
+
+ #: The configuration dictionary as :class:`Config`. This behaves
+ #: exactly like a regular dictionary but supports additional methods
+ #: to load a config from files.
+ self.config = self.make_config(instance_relative_config)
+
+ #: An instance of :attr:`aborter_class` created by
+ #: :meth:`make_aborter`. This is called by :func:`flask.abort`
+ #: to raise HTTP errors, and can be called directly as well.
+ #:
+ #: .. versionadded:: 2.2
+ #: Moved from ``flask.abort``, which calls this object.
+ self.aborter = self.make_aborter()
+
+ self.json: JSONProvider = self.json_provider_class(self)
+ """Provides access to JSON methods. Functions in ``flask.json``
+ will call methods on this provider when the application context
+ is active. Used for handling JSON requests and responses.
+
+ An instance of :attr:`json_provider_class`. Can be customized by
+ changing that attribute on a subclass, or by assigning to this
+ attribute afterwards.
+
+ The default, :class:`~flask.json.provider.DefaultJSONProvider`,
+ uses Python's built-in :mod:`json` library. A different provider
+ can use a different JSON library.
+
+ .. versionadded:: 2.2
+ """
+
+ #: A list of functions that are called by
+ #: :meth:`handle_url_build_error` when :meth:`.url_for` raises a
+ #: :exc:`~werkzeug.routing.BuildError`. Each function is called
+ #: with ``error``, ``endpoint`` and ``values``. If a function
+ #: returns ``None`` or raises a ``BuildError``, it is skipped.
+ #: Otherwise, its return value is returned by ``url_for``.
+ #:
+ #: .. versionadded:: 0.9
+ self.url_build_error_handlers: list[
+ t.Callable[[Exception, str, dict[str, t.Any]], str]
+ ] = []
+
+ #: A list of functions that are called when the application context
+ #: is destroyed. Since the application context is also torn down
+ #: when the request ends, this is the place to put code that
+ #: disconnects from databases.
+ #:
+ #: .. versionadded:: 0.9
+ self.teardown_appcontext_funcs: list[ft.TeardownCallable] = []
+
+ #: A list of shell context processor functions that should be run
+ #: when a shell context is created.
+ #:
+ #: .. versionadded:: 0.11
+ self.shell_context_processors: list[ft.ShellContextProcessorCallable] = []
+
+ #: Maps registered blueprint names to blueprint objects. The
+ #: dict retains the order the blueprints were registered in.
+ #: Blueprints can be registered multiple times; this dict does
+ #: not track how often they were attached.
+ #:
+ #: .. versionadded:: 0.7
+ self.blueprints: dict[str, Blueprint] = {}
+
+ #: A place where extensions can store application-specific state. For
+ #: example, this is where an extension could store database engines and
+ #: similar things.
+ #:
+ #: The key must match the name of the extension module. For example in
+ #: case of a "Flask-Foo" extension in `flask_foo`, the key would be
+ #: ``'foo'``.
+ #:
+ #: .. versionadded:: 0.7
+ self.extensions: dict[str, t.Any] = {}
+
+ #: The :class:`~werkzeug.routing.Map` for this instance. You can use
+ #: this to change the routing converters after the class was created
+ #: but before any routes are connected. Example::
+ #:
+ #: from werkzeug.routing import BaseConverter
+ #:
+ #: class ListConverter(BaseConverter):
+ #: def to_python(self, value):
+ #: return value.split(',')
+ #: def to_url(self, values):
+ #: return ','.join(super(ListConverter, self).to_url(value)
+ #: for value in values)
+ #:
+ #: app = Flask(__name__)
+ #: app.url_map.converters['list'] = ListConverter
+ self.url_map = self.url_map_class(host_matching=host_matching)
+
+ self.subdomain_matching = subdomain_matching
+
+ # tracks internally if the application already handled at least one
+ # request.
+ self._got_first_request = False
+
+ def _check_setup_finished(self, f_name: str) -> None:
+ if self._got_first_request:
+ raise AssertionError(
+ f"The setup method '{f_name}' can no longer be called"
+ " on the application. It has already handled its first"
+ " request, any changes will not be applied"
+ " consistently.\n"
+ "Make sure all imports, decorators, functions, etc."
+ " needed to set up the application are done before"
+ " running it."
+ )
+
+ @cached_property
+ def name(self) -> str: # type: ignore
+ """The name of the application. This is usually the import name
+ with the difference that it's guessed from the run file if the
+ import name is ``"__main__"``. This name is used as a display name when
+ Flask needs the name of the application. It can be set and overridden
+ to change the value.
+
+ .. versionadded:: 0.8
+ """
+ if self.import_name == "__main__":
+ fn: str | None = getattr(sys.modules["__main__"], "__file__", None)
+ if fn is None:
+ return "__main__"
+ return os.path.splitext(os.path.basename(fn))[0]
+ return self.import_name
+
+ @cached_property
+ def logger(self) -> logging.Logger:
+ """A standard Python :class:`~logging.Logger` for the app, with
+ the same name as :attr:`name`.
+
+ In debug mode, the logger's :attr:`~logging.Logger.level` will
+ be set to :data:`~logging.DEBUG`.
+
+ If there are no handlers configured, a default handler will be
+ added. See :doc:`/logging` for more information.
+
+ .. versionchanged:: 1.1.0
+ The logger takes the same name as :attr:`name` rather than
+ hard-coding ``"flask.app"``.
+
+ .. versionchanged:: 1.0.0
+ Behavior was simplified. The logger is always named
+ ``"flask.app"``. The level is only set during configuration,
+ it doesn't check ``app.debug`` each time. Only one format is
+ used, not different ones depending on ``app.debug``. No
+ handlers are removed, and a handler is only added if no
+ handlers are already configured.
+
+ .. versionadded:: 0.3
+ """
+ return create_logger(self)
+
+ @cached_property
+ def jinja_env(self) -> Environment:
+ """The Jinja environment used to load templates.
+
+ The environment is created the first time this property is
+ accessed. Changing :attr:`jinja_options` after that will have no
+ effect.
+ """
+ return self.create_jinja_environment()
+
+ def create_jinja_environment(self) -> Environment:
+ raise NotImplementedError()
+
+ def make_config(self, instance_relative: bool = False) -> Config:
+ """Used to create the config attribute by the Flask constructor.
+ The `instance_relative` parameter is passed in from the constructor
+ of Flask (there named `instance_relative_config`) and indicates if
+ the config should be relative to the instance path or the root path
+ of the application.
+
+ .. versionadded:: 0.8
+ """
+ root_path = self.root_path
+ if instance_relative:
+ root_path = self.instance_path
+ defaults = dict(self.default_config)
+ defaults["DEBUG"] = get_debug_flag()
+ return self.config_class(root_path, defaults)
+
+ def make_aborter(self) -> Aborter:
+ """Create the object to assign to :attr:`aborter`. That object
+ is called by :func:`flask.abort` to raise HTTP errors, and can
+ be called directly as well.
+
+ By default, this creates an instance of :attr:`aborter_class`,
+ which defaults to :class:`werkzeug.exceptions.Aborter`.
+
+ .. versionadded:: 2.2
+ """
+ return self.aborter_class()
+
+ def auto_find_instance_path(self) -> str:
+ """Tries to locate the instance path if it was not provided to the
+ constructor of the application class. It will basically calculate
+ the path to a folder named ``instance`` next to your main file or
+ the package.
+
+ .. versionadded:: 0.8
+ """
+ prefix, package_path = find_package(self.import_name)
+ if prefix is None:
+ return os.path.join(package_path, "instance")
+ return os.path.join(prefix, "var", f"{self.name}-instance")
+
+ def create_global_jinja_loader(self) -> DispatchingJinjaLoader:
+ """Creates the loader for the Jinja2 environment. Can be used to
+ override just the loader and keeping the rest unchanged. It's
+ discouraged to override this function. Instead one should override
+ the :meth:`jinja_loader` function instead.
+
+ The global loader dispatches between the loaders of the application
+ and the individual blueprints.
+
+ .. versionadded:: 0.7
+ """
+ return DispatchingJinjaLoader(self)
+
+ def select_jinja_autoescape(self, filename: str) -> bool:
+ """Returns ``True`` if autoescaping should be active for the given
+ template name. If no template name is given, returns ``True``.
+
+ .. versionchanged:: 2.2
+ Autoescaping is now enabled by default for ``.svg`` files.
+
+ .. versionadded:: 0.5
+ """
+ if filename is None:
+ return True
+ return filename.endswith((".html", ".htm", ".xml", ".xhtml", ".svg"))
+
+ @property
+ def debug(self) -> bool:
+ """Whether debug mode is enabled. When using ``flask run`` to start the
+ development server, an interactive debugger will be shown for unhandled
+ exceptions, and the server will be reloaded when code changes. This maps to the
+ :data:`DEBUG` config key. It may not behave as expected if set late.
+
+ **Do not enable debug mode when deploying in production.**
+
+ Default: ``False``
+ """
+ return self.config["DEBUG"] # type: ignore[no-any-return]
+
+ @debug.setter
+ def debug(self, value: bool) -> None:
+ self.config["DEBUG"] = value
+
+ if self.config["TEMPLATES_AUTO_RELOAD"] is None:
+ self.jinja_env.auto_reload = value
+
+ @setupmethod
+ def register_blueprint(self, blueprint: Blueprint, **options: t.Any) -> None:
+ """Register a :class:`~flask.Blueprint` on the application. Keyword
+ arguments passed to this method will override the defaults set on the
+ blueprint.
+
+ Calls the blueprint's :meth:`~flask.Blueprint.register` method after
+ recording the blueprint in the application's :attr:`blueprints`.
+
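+ A minimal sketch (``admin_bp`` is a hypothetical blueprint)::
+
+     from flask import Blueprint
+
+     admin_bp = Blueprint("admin", __name__)
+     app.register_blueprint(admin_bp, url_prefix="/admin")
+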
+ :param blueprint: The blueprint to register.
+ :param url_prefix: Blueprint routes will be prefixed with this.
+ :param subdomain: Blueprint routes will match on this subdomain.
+ :param url_defaults: Blueprint routes will use these default values for
+ view arguments.
+ :param options: Additional keyword arguments are passed to
+ :class:`~flask.blueprints.BlueprintSetupState`. They can be
+ accessed in :meth:`~flask.Blueprint.record` callbacks.
+
+ .. versionchanged:: 2.0.1
+ The ``name`` option can be used to change the (pre-dotted)
+ name the blueprint is registered with. This allows the same
+ blueprint to be registered multiple times with unique names
+ for ``url_for``.
+
+ .. versionadded:: 0.7
+ """
+ blueprint.register(self, options)
+
+ def iter_blueprints(self) -> t.ValuesView[Blueprint]:
+ """Iterates over all blueprints by the order they were registered.
+
+ .. versionadded:: 0.11
+ """
+ return self.blueprints.values()
+
+ @setupmethod
+ def add_url_rule(
+ self,
+ rule: str,
+ endpoint: str | None = None,
+ view_func: ft.RouteCallable | None = None,
+ provide_automatic_options: bool | None = None,
+ **options: t.Any,
+ ) -> None:
+ if endpoint is None:
+ endpoint = _endpoint_from_view_func(view_func) # type: ignore
+ options["endpoint"] = endpoint
+ methods = options.pop("methods", None)
+
+ # if the methods are not given and the view_func object knows its
+ # methods we can use that instead. If neither exists, we go with
+ # a tuple of only ``GET`` as default.
+ if methods is None:
+ methods = getattr(view_func, "methods", None) or ("GET",)
+ if isinstance(methods, str):
+ raise TypeError(
+ "Allowed methods must be a list of strings, for"
+ ' example: @app.route(..., methods=["POST"])'
+ )
+ methods = {item.upper() for item in methods}
+
+ # Methods that should always be added
+ required_methods = set(getattr(view_func, "required_methods", ()))
+
+ # starting with Flask 0.8 the view_func object can disable and
+ # force-enable the automatic options handling.
+ if provide_automatic_options is None:
+ provide_automatic_options = getattr(
+ view_func, "provide_automatic_options", None
+ )
+
+ if provide_automatic_options is None:
+ if "OPTIONS" not in methods:
+ provide_automatic_options = True
+ required_methods.add("OPTIONS")
+ else:
+ provide_automatic_options = False
+
+ # Add the required methods now.
+ methods |= required_methods
+
+ rule_obj = self.url_rule_class(rule, methods=methods, **options)
+ rule_obj.provide_automatic_options = provide_automatic_options # type: ignore[attr-defined]
+
+ self.url_map.add(rule_obj)
+ if view_func is not None:
+ old_func = self.view_functions.get(endpoint)
+ if old_func is not None and old_func != view_func:
+ raise AssertionError(
+ "View function mapping is overwriting an existing"
+ f" endpoint function: {endpoint}"
+ )
+ self.view_functions[endpoint] = view_func
+
+ @setupmethod
+ def template_filter(
+ self, name: str | None = None
+ ) -> t.Callable[[T_template_filter], T_template_filter]:
+ """A decorator that is used to register custom template filter.
+ You can specify a name for the filter, otherwise the function
+ name will be used. Example::
+
+ @app.template_filter()
+ def reverse(s):
+ return s[::-1]
+
+ :param name: the optional name of the filter, otherwise the
+ function name will be used.
+ """
+
+ def decorator(f: T_template_filter) -> T_template_filter:
+ self.add_template_filter(f, name=name)
+ return f
+
+ return decorator
+
+ @setupmethod
+ def add_template_filter(
+ self, f: ft.TemplateFilterCallable, name: str | None = None
+ ) -> None:
+ """Register a custom template filter. Works exactly like the
+ :meth:`template_filter` decorator.
+
+ :param name: the optional name of the filter, otherwise the
+ function name will be used.
+ """
+ self.jinja_env.filters[name or f.__name__] = f
+
+ @setupmethod
+ def template_test(
+ self, name: str | None = None
+ ) -> t.Callable[[T_template_test], T_template_test]:
+ """A decorator that is used to register custom template test.
+ You can specify a name for the test, otherwise the function
+ name will be used. Example::
+
+ @app.template_test()
+ def is_prime(n):
+ if n == 2:
+ return True
+ for i in range(2, int(math.ceil(math.sqrt(n))) + 1):
+ if n % i == 0:
+ return False
+ return True
+
+ .. versionadded:: 0.10
+
+ :param name: the optional name of the test, otherwise the
+ function name will be used.
+ """
+
+ def decorator(f: T_template_test) -> T_template_test:
+ self.add_template_test(f, name=name)
+ return f
+
+ return decorator
+
+ @setupmethod
+ def add_template_test(
+ self, f: ft.TemplateTestCallable, name: str | None = None
+ ) -> None:
+ """Register a custom template test. Works exactly like the
+ :meth:`template_test` decorator.
+
+ .. versionadded:: 0.10
+
+ :param name: the optional name of the test, otherwise the
+ function name will be used.
+ """
+ self.jinja_env.tests[name or f.__name__] = f
+
+ @setupmethod
+ def template_global(
+ self, name: str | None = None
+ ) -> t.Callable[[T_template_global], T_template_global]:
+ """A decorator that is used to register a custom template global function.
+ You can specify a name for the global function, otherwise the function
+ name will be used. Example::
+
+ @app.template_global()
+ def double(n):
+ return 2 * n
+
+ .. versionadded:: 0.10
+
+ :param name: the optional name of the global function, otherwise the
+ function name will be used.
+ """
+
+ def decorator(f: T_template_global) -> T_template_global:
+ self.add_template_global(f, name=name)
+ return f
+
+ return decorator
+
+ @setupmethod
+ def add_template_global(
+ self, f: ft.TemplateGlobalCallable, name: str | None = None
+ ) -> None:
+ """Register a custom template global function. Works exactly like the
+ :meth:`template_global` decorator.
+
+ .. versionadded:: 0.10
+
+ :param name: the optional name of the global function, otherwise the
+ function name will be used.
+ """
+ self.jinja_env.globals[name or f.__name__] = f
+
+ @setupmethod
+ def teardown_appcontext(self, f: T_teardown) -> T_teardown:
+ """Registers a function to be called when the application
+ context is popped. The application context is typically popped
+ after the request context for each request, at the end of CLI
+ commands, or after a manually pushed context ends.
+
+ .. code-block:: python
+
+ with app.app_context():
+ ...
+
+ When the ``with`` block exits (or ``ctx.pop()`` is called), the
+ teardown functions are called just before the app context is
+ made inactive. Since a request context typically also manages an
+ application context it would also be called when you pop a
+ request context.
+
+ When a teardown function is called because of an unhandled
+ exception, it will be passed an error object. If an
+ :meth:`errorhandler` is registered, it will handle the exception
+ and the teardown will not receive it.
+
+ Teardown functions must avoid raising exceptions. If they
+ execute code that might fail they must surround that code with a
+ ``try``/``except`` block and log any errors.
+
+ The return values of teardown functions are ignored.
+
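+ For example, a common use is releasing a resource stored on ``g``
+ (a minimal sketch; ``g.db`` is a hypothetical attribute):
+
+ .. code-block:: python
+
+     @app.teardown_appcontext
+     def close_db(exc):
+         db = g.pop("db", None)
+
+         if db is not None:
+             db.close()
+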
+ .. versionadded:: 0.9
+ """
+ self.teardown_appcontext_funcs.append(f)
+ return f
+
+ @setupmethod
+ def shell_context_processor(
+ self, f: T_shell_context_processor
+ ) -> T_shell_context_processor:
+ """Registers a shell context processor function.
+
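+ The function should return a dict of variables to add to the
+ ``flask shell`` namespace. A minimal sketch (``db`` and ``User``
+ are hypothetical names):
+
+ .. code-block:: python
+
+     @app.shell_context_processor
+     def make_shell_context():
+         return {"db": db, "User": User}
+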
+ .. versionadded:: 0.11
+ """
+ self.shell_context_processors.append(f)
+ return f
+
+ def _find_error_handler(
+ self, e: Exception, blueprints: list[str]
+ ) -> ft.ErrorHandlerCallable | None:
+ """Return a registered error handler for an exception in this order:
+ blueprint handler for a specific code, app handler for a specific code,
+ blueprint handler for an exception class, app handler for an exception
+ class, or ``None`` if a suitable handler is not found.
+ """
+ exc_class, code = self._get_exc_class_and_code(type(e))
+ names = (*blueprints, None)
+
+ for c in (code, None) if code is not None else (None,):
+ for name in names:
+ handler_map = self.error_handler_spec[name][c]
+
+ if not handler_map:
+ continue
+
+ for cls in exc_class.__mro__:
+ handler = handler_map.get(cls)
+
+ if handler is not None:
+ return handler
+ return None
+
+ def trap_http_exception(self, e: Exception) -> bool:
+ """Checks if an HTTP exception should be trapped or not. By default
+ this will return ``False`` for all exceptions except for a bad request
+ key error if ``TRAP_BAD_REQUEST_ERRORS`` is set to ``True``. It
+ also returns ``True`` if ``TRAP_HTTP_EXCEPTIONS`` is set to ``True``.
+
+ This is called for all HTTP exceptions raised by a view function.
+ If it returns ``True`` for any exception the error handler for this
+ exception is not called and it shows up as a regular exception in the
+ traceback. This is helpful for debugging implicitly raised HTTP
+ exceptions.
+
+ .. versionchanged:: 1.0
+ Bad request errors are not trapped by default in debug mode.
+
+ .. versionadded:: 0.8
+ """
+ if self.config["TRAP_HTTP_EXCEPTIONS"]:
+ return True
+
+ trap_bad_request = self.config["TRAP_BAD_REQUEST_ERRORS"]
+
+ # if unset, trap key errors in debug mode
+ if (
+ trap_bad_request is None
+ and self.debug
+ and isinstance(e, BadRequestKeyError)
+ ):
+ return True
+
+ if trap_bad_request:
+ return isinstance(e, BadRequest)
+
+ return False
+
+ def should_ignore_error(self, error: BaseException | None) -> bool:
+ """This is called to figure out if an error should be ignored
+ or not as far as the teardown system is concerned. If this
+ function returns ``True`` then the teardown handlers will not be
+ passed the error.
+
+ .. versionadded:: 0.10
+ """
+ return False
+
+ def redirect(self, location: str, code: int = 302) -> BaseResponse:
+ """Create a redirect response object.
+
+ This is called by :func:`flask.redirect`, and can be called
+ directly as well.
+
+ :param location: The URL to redirect to.
+ :param code: The status code for the redirect.
+
+ .. versionadded:: 2.2
+ Moved from ``flask.redirect``, which calls this method.
+ """
+ return _wz_redirect(
+ location,
+ code=code,
+ Response=self.response_class, # type: ignore[arg-type]
+ )
+
+ def inject_url_defaults(self, endpoint: str, values: dict[str, t.Any]) -> None:
+ """Injects the URL defaults for the given endpoint directly into
+ the values dictionary passed. This is used internally and
+ automatically called on URL building.
+
+ .. versionadded:: 0.7
+ """
+ names: t.Iterable[str | None] = (None,)
+
+ # url_for may be called outside a request context, parse the
+ # passed endpoint instead of using request.blueprints.
+ if "." in endpoint:
+ names = chain(
+ names, reversed(_split_blueprint_path(endpoint.rpartition(".")[0]))
+ )
+
+ for name in names:
+ if name in self.url_default_functions:
+ for func in self.url_default_functions[name]:
+ func(endpoint, values)
+
+ def handle_url_build_error(
+ self, error: BuildError, endpoint: str, values: dict[str, t.Any]
+ ) -> str:
+ """Called by :meth:`.url_for` if a
+ :exc:`~werkzeug.routing.BuildError` was raised. If this returns
+ a value, it will be returned by ``url_for``, otherwise the error
+ will be re-raised.
+
+ Each function in :attr:`url_build_error_handlers` is called with
+ ``error``, ``endpoint`` and ``values``. If a function returns
+ ``None`` or raises a ``BuildError``, it is skipped. Otherwise,
+ its return value is returned by ``url_for``.
+
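+ For example, a handler could fall back to an external URL instead of
+ failing (a minimal sketch; the endpoint name and URL are hypothetical):
+
+ .. code-block:: python
+
+     def external_url_handler(error, endpoint, values):
+         if endpoint == "legacy_docs":
+             return "https://example.com/docs"
+
+         return None
+
+     app.url_build_error_handlers.append(external_url_handler)
+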
+ :param error: The active ``BuildError`` being handled.
+ :param endpoint: The endpoint being built.
+ :param values: The keyword arguments passed to ``url_for``.
+ """
+ for handler in self.url_build_error_handlers:
+ try:
+ rv = handler(error, endpoint, values)
+ except BuildError as e:
+ # make error available outside except block
+ error = e
+ else:
+ if rv is not None:
+ return rv
+
+ # Re-raise if called with an active exception, otherwise raise
+ # the passed in exception.
+ if error is sys.exc_info()[1]:
+ raise
+
+ raise error
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/flask/sansio/blueprints.py b/stripe_config/.venv/lib/python3.12/site-packages/flask/sansio/blueprints.py
new file mode 100644
index 0000000..4f912cc
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/flask/sansio/blueprints.py
@@ -0,0 +1,632 @@
+from __future__ import annotations
+
+import os
+import typing as t
+from collections import defaultdict
+from functools import update_wrapper
+
+from .. import typing as ft
+from .scaffold import _endpoint_from_view_func
+from .scaffold import _sentinel
+from .scaffold import Scaffold
+from .scaffold import setupmethod
+
+if t.TYPE_CHECKING: # pragma: no cover
+ from .app import App
+
+DeferredSetupFunction = t.Callable[["BlueprintSetupState"], None]
+T_after_request = t.TypeVar("T_after_request", bound=ft.AfterRequestCallable[t.Any])
+T_before_request = t.TypeVar("T_before_request", bound=ft.BeforeRequestCallable)
+T_error_handler = t.TypeVar("T_error_handler", bound=ft.ErrorHandlerCallable)
+T_teardown = t.TypeVar("T_teardown", bound=ft.TeardownCallable)
+T_template_context_processor = t.TypeVar(
+ "T_template_context_processor", bound=ft.TemplateContextProcessorCallable
+)
+T_template_filter = t.TypeVar("T_template_filter", bound=ft.TemplateFilterCallable)
+T_template_global = t.TypeVar("T_template_global", bound=ft.TemplateGlobalCallable)
+T_template_test = t.TypeVar("T_template_test", bound=ft.TemplateTestCallable)
+T_url_defaults = t.TypeVar("T_url_defaults", bound=ft.URLDefaultCallable)
+T_url_value_preprocessor = t.TypeVar(
+ "T_url_value_preprocessor", bound=ft.URLValuePreprocessorCallable
+)
+
+
+class BlueprintSetupState:
+ """Temporary holder object for registering a blueprint with the
+ application. An instance of this class is created by the
+ :meth:`~flask.Blueprint.make_setup_state` method and later passed
+ to all register callback functions.
+ """
+
+ def __init__(
+ self,
+ blueprint: Blueprint,
+ app: App,
+ options: t.Any,
+ first_registration: bool,
+ ) -> None:
+ #: a reference to the current application
+ self.app = app
+
+ #: a reference to the blueprint that created this setup state.
+ self.blueprint = blueprint
+
+ #: a dictionary with all options that were passed to the
+ #: :meth:`~flask.Flask.register_blueprint` method.
+ self.options = options
+
+ #: as blueprints can be registered multiple times with the
+ #: application and not everything wants to be registered
+ #: multiple times on it, this attribute can be used to figure
+ #: out if the blueprint was registered in the past already.
+ self.first_registration = first_registration
+
+ subdomain = self.options.get("subdomain")
+ if subdomain is None:
+ subdomain = self.blueprint.subdomain
+
+ #: The subdomain that the blueprint should be active for, ``None``
+ #: otherwise.
+ self.subdomain = subdomain
+
+ url_prefix = self.options.get("url_prefix")
+ if url_prefix is None:
+ url_prefix = self.blueprint.url_prefix
+ #: The prefix that should be used for all URLs defined on the
+ #: blueprint.
+ self.url_prefix = url_prefix
+
+ self.name = self.options.get("name", blueprint.name)
+ self.name_prefix = self.options.get("name_prefix", "")
+
+ #: A dictionary with URL defaults that is added to each and every
+ #: URL that was defined with the blueprint.
+ self.url_defaults = dict(self.blueprint.url_values_defaults)
+ self.url_defaults.update(self.options.get("url_defaults", ()))
+
+ def add_url_rule(
+ self,
+ rule: str,
+ endpoint: str | None = None,
+ view_func: ft.RouteCallable | None = None,
+ **options: t.Any,
+ ) -> None:
+ """A helper method to register a rule (and optionally a view function)
+ to the application. The endpoint is automatically prefixed with the
+ blueprint's name.
+ """
+ if self.url_prefix is not None:
+ if rule:
+ rule = "/".join((self.url_prefix.rstrip("/"), rule.lstrip("/")))
+ else:
+ rule = self.url_prefix
+ options.setdefault("subdomain", self.subdomain)
+ if endpoint is None:
+ endpoint = _endpoint_from_view_func(view_func) # type: ignore
+ defaults = self.url_defaults
+ if "defaults" in options:
+ defaults = dict(defaults, **options.pop("defaults"))
+
+ self.app.add_url_rule(
+ rule,
+ f"{self.name_prefix}.{self.name}.{endpoint}".lstrip("."),
+ view_func,
+ defaults=defaults,
+ **options,
+ )
+
+
+class Blueprint(Scaffold):
+ """Represents a blueprint, a collection of routes and other
+ app-related functions that can be registered on a real application
+ later.
+
+ A blueprint is an object that allows defining application functions
+ without requiring an application object ahead of time. It uses the
+ same decorators as :class:`~flask.Flask`, but defers the need for an
+ application by recording them for later registration.
+
+ Decorating a function with a blueprint creates a deferred function
+ that is called with :class:`~flask.blueprints.BlueprintSetupState`
+ when the blueprint is registered on an application.
+
+ See :doc:`/blueprints` for more information.
+
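+ A minimal sketch of defining and registering a blueprint (the names
+ are hypothetical)::
+
+     from flask import Blueprint, Flask
+
+     admin = Blueprint("admin", __name__, url_prefix="/admin")
+
+     @admin.route("/")
+     def index():
+         return "Admin index"
+
+     app = Flask(__name__)
+     app.register_blueprint(admin)
+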
+ :param name: The name of the blueprint. Will be prepended to each
+ endpoint name.
+ :param import_name: The name of the blueprint package, usually
+ ``__name__``. This helps locate the ``root_path`` for the
+ blueprint.
+ :param static_folder: A folder with static files that should be
+ served by the blueprint's static route. The path is relative to
+ the blueprint's root path. Blueprint static files are disabled
+ by default.
+ :param static_url_path: The url to serve static files from.
+ Defaults to ``static_folder``. If the blueprint does not have
+ a ``url_prefix``, the app's static route will take precedence,
+ and the blueprint's static files won't be accessible.
+ :param template_folder: A folder with templates that should be added
+ to the app's template search path. The path is relative to the
+ blueprint's root path. Blueprint templates are disabled by
+ default. Blueprint templates have a lower precedence than those
+ in the app's templates folder.
+ :param url_prefix: A path to prepend to all of the blueprint's URLs,
+ to make them distinct from the rest of the app's routes.
+ :param subdomain: A subdomain that blueprint routes will match on by
+ default.
+ :param url_defaults: A dict of default values that blueprint routes
+ will receive by default.
+ :param root_path: By default, the blueprint will automatically set
+ this based on ``import_name``. In certain situations this
+ automatic detection can fail, so the path can be specified
+ manually instead.
+
+ .. versionchanged:: 1.1.0
+ Blueprints have a ``cli`` group to register nested CLI commands.
+ The ``cli_group`` parameter controls the name of the group under
+ the ``flask`` command.
+
+ .. versionadded:: 0.7
+ """
+
+ _got_registered_once = False
+
+ def __init__(
+ self,
+ name: str,
+ import_name: str,
+ static_folder: str | os.PathLike[str] | None = None,
+ static_url_path: str | None = None,
+ template_folder: str | os.PathLike[str] | None = None,
+ url_prefix: str | None = None,
+ subdomain: str | None = None,
+ url_defaults: dict[str, t.Any] | None = None,
+ root_path: str | None = None,
+ cli_group: str | None = _sentinel, # type: ignore[assignment]
+ ):
+ super().__init__(
+ import_name=import_name,
+ static_folder=static_folder,
+ static_url_path=static_url_path,
+ template_folder=template_folder,
+ root_path=root_path,
+ )
+
+ if not name:
+ raise ValueError("'name' may not be empty.")
+
+ if "." in name:
+ raise ValueError("'name' may not contain a dot '.' character.")
+
+ self.name = name
+ self.url_prefix = url_prefix
+ self.subdomain = subdomain
+ self.deferred_functions: list[DeferredSetupFunction] = []
+
+ if url_defaults is None:
+ url_defaults = {}
+
+ self.url_values_defaults = url_defaults
+ self.cli_group = cli_group
+ self._blueprints: list[tuple[Blueprint, dict[str, t.Any]]] = []
+
+ def _check_setup_finished(self, f_name: str) -> None:
+ if self._got_registered_once:
+ raise AssertionError(
+ f"The setup method '{f_name}' can no longer be called on the blueprint"
+ f" '{self.name}'. It has already been registered at least once, any"
+ " changes will not be applied consistently.\n"
+ "Make sure all imports, decorators, functions, etc. needed to set up"
+ " the blueprint are done before registering it."
+ )
+
+ @setupmethod
+ def record(self, func: DeferredSetupFunction) -> None:
+ """Registers a function that is called when the blueprint is
+ registered on the application. This function is called with the
+ state as argument as returned by the :meth:`make_setup_state`
+ method.
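+
+ A minimal sketch (``bp`` is a hypothetical blueprint and the config
+ key is arbitrary)::
+
+     def configure(state):
+         state.app.config.setdefault("MY_OPTION", "default")
+
+     bp.record(configure)
+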
+ """
+ self.deferred_functions.append(func)
+
+ @setupmethod
+ def record_once(self, func: DeferredSetupFunction) -> None:
+ """Works like :meth:`record` but wraps the function in another
+ function that will ensure the function is only called once. If the
+ blueprint is registered a second time on the application, the
+ function passed is not called.
+ """
+
+ def wrapper(state: BlueprintSetupState) -> None:
+ if state.first_registration:
+ func(state)
+
+ self.record(update_wrapper(wrapper, func))
+
+ def make_setup_state(
+ self, app: App, options: dict[str, t.Any], first_registration: bool = False
+ ) -> BlueprintSetupState:
+ """Creates an instance of :meth:`~flask.blueprints.BlueprintSetupState`
+ object that is later passed to the register callback functions.
+ Subclasses can override this to return a subclass of the setup state.
+ """
+ return BlueprintSetupState(self, app, options, first_registration)
+
+ @setupmethod
+ def register_blueprint(self, blueprint: Blueprint, **options: t.Any) -> None:
+ """Register a :class:`~flask.Blueprint` on this blueprint. Keyword
+ arguments passed to this method will override the defaults set
+ on the blueprint.
+
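+ A minimal sketch of nesting one blueprint under another (hypothetical
+ names)::
+
+     parent = Blueprint("parent", __name__, url_prefix="/parent")
+     child = Blueprint("child", __name__, url_prefix="/child")
+     parent.register_blueprint(child)
+
+ Once ``parent`` is registered on the application, the child's routes
+ are served under ``/parent/child``.
+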
+ .. versionchanged:: 2.0.1
+ The ``name`` option can be used to change the (pre-dotted)
+ name the blueprint is registered with. This allows the same
+ blueprint to be registered multiple times with unique names
+ for ``url_for``.
+
+ .. versionadded:: 2.0
+ """
+ if blueprint is self:
+ raise ValueError("Cannot register a blueprint on itself")
+ self._blueprints.append((blueprint, options))
+
+ def register(self, app: App, options: dict[str, t.Any]) -> None:
+ """Called by :meth:`Flask.register_blueprint` to register all
+ views and callbacks registered on the blueprint with the
+ application. Creates a :class:`.BlueprintSetupState` and calls
+ each :meth:`record` callback with it.
+
+ :param app: The application this blueprint is being registered
+ with.
+ :param options: Keyword arguments forwarded from
+ :meth:`~Flask.register_blueprint`.
+
+ .. versionchanged:: 2.3
+ Nested blueprints now correctly apply subdomains.
+
+ .. versionchanged:: 2.1
+ Registering the same blueprint with the same name multiple
+ times is an error.
+
+ .. versionchanged:: 2.0.1
+ Nested blueprints are registered with their dotted name.
+ This allows different blueprints with the same name to be
+ nested at different locations.
+
+ .. versionchanged:: 2.0.1
+ The ``name`` option can be used to change the (pre-dotted)
+ name the blueprint is registered with. This allows the same
+ blueprint to be registered multiple times with unique names
+ for ``url_for``.
+ """
+ name_prefix = options.get("name_prefix", "")
+ self_name = options.get("name", self.name)
+ name = f"{name_prefix}.{self_name}".lstrip(".")
+
+ if name in app.blueprints:
+ bp_desc = "this" if app.blueprints[name] is self else "a different"
+ existing_at = f" '{name}'" if self_name != name else ""
+
+ raise ValueError(
+ f"The name '{self_name}' is already registered for"
+ f" {bp_desc} blueprint{existing_at}. Use 'name=' to"
+ f" provide a unique name."
+ )
+
+ first_bp_registration = not any(bp is self for bp in app.blueprints.values())
+ first_name_registration = name not in app.blueprints
+
+ app.blueprints[name] = self
+ self._got_registered_once = True
+ state = self.make_setup_state(app, options, first_bp_registration)
+
+ if self.has_static_folder:
+ state.add_url_rule(
+ f"{self.static_url_path}/",
+ view_func=self.send_static_file, # type: ignore[attr-defined]
+ endpoint="static",
+ )
+
+ # Merge blueprint data into parent.
+ if first_bp_registration or first_name_registration:
+ self._merge_blueprint_funcs(app, name)
+
+ for deferred in self.deferred_functions:
+ deferred(state)
+
+ cli_resolved_group = options.get("cli_group", self.cli_group)
+
+ if self.cli.commands:
+ if cli_resolved_group is None:
+ app.cli.commands.update(self.cli.commands)
+ elif cli_resolved_group is _sentinel:
+ self.cli.name = name
+ app.cli.add_command(self.cli)
+ else:
+ self.cli.name = cli_resolved_group
+ app.cli.add_command(self.cli)
+
+ for blueprint, bp_options in self._blueprints:
+ bp_options = bp_options.copy()
+ bp_url_prefix = bp_options.get("url_prefix")
+ bp_subdomain = bp_options.get("subdomain")
+
+ if bp_subdomain is None:
+ bp_subdomain = blueprint.subdomain
+
+ if state.subdomain is not None and bp_subdomain is not None:
+ bp_options["subdomain"] = bp_subdomain + "." + state.subdomain
+ elif bp_subdomain is not None:
+ bp_options["subdomain"] = bp_subdomain
+ elif state.subdomain is not None:
+ bp_options["subdomain"] = state.subdomain
+
+ if bp_url_prefix is None:
+ bp_url_prefix = blueprint.url_prefix
+
+ if state.url_prefix is not None and bp_url_prefix is not None:
+ bp_options["url_prefix"] = (
+ state.url_prefix.rstrip("/") + "/" + bp_url_prefix.lstrip("/")
+ )
+ elif bp_url_prefix is not None:
+ bp_options["url_prefix"] = bp_url_prefix
+ elif state.url_prefix is not None:
+ bp_options["url_prefix"] = state.url_prefix
+
+ bp_options["name_prefix"] = name
+ blueprint.register(app, bp_options)
+
+ def _merge_blueprint_funcs(self, app: App, name: str) -> None:
+ def extend(
+ bp_dict: dict[ft.AppOrBlueprintKey, list[t.Any]],
+ parent_dict: dict[ft.AppOrBlueprintKey, list[t.Any]],
+ ) -> None:
+ for key, values in bp_dict.items():
+ key = name if key is None else f"{name}.{key}"
+ parent_dict[key].extend(values)
+
+ for key, value in self.error_handler_spec.items():
+ key = name if key is None else f"{name}.{key}"
+ value = defaultdict(
+ dict,
+ {
+ code: {exc_class: func for exc_class, func in code_values.items()}
+ for code, code_values in value.items()
+ },
+ )
+ app.error_handler_spec[key] = value
+
+ for endpoint, func in self.view_functions.items():
+ app.view_functions[endpoint] = func
+
+ extend(self.before_request_funcs, app.before_request_funcs)
+ extend(self.after_request_funcs, app.after_request_funcs)
+ extend(
+ self.teardown_request_funcs,
+ app.teardown_request_funcs,
+ )
+ extend(self.url_default_functions, app.url_default_functions)
+ extend(self.url_value_preprocessors, app.url_value_preprocessors)
+ extend(self.template_context_processors, app.template_context_processors)
+
+ @setupmethod
+ def add_url_rule(
+ self,
+ rule: str,
+ endpoint: str | None = None,
+ view_func: ft.RouteCallable | None = None,
+ provide_automatic_options: bool | None = None,
+ **options: t.Any,
+ ) -> None:
+ """Register a URL rule with the blueprint. See :meth:`.Flask.add_url_rule` for
+ full documentation.
+
+ The URL rule is prefixed with the blueprint's URL prefix. The endpoint name,
+ used with :func:`url_for`, is prefixed with the blueprint's name.
+ """
+ if endpoint and "." in endpoint:
+ raise ValueError("'endpoint' may not contain a dot '.' character.")
+
+ if view_func and hasattr(view_func, "__name__") and "." in view_func.__name__:
+ raise ValueError("'view_func' name may not contain a dot '.' character.")
+
+ self.record(
+ lambda s: s.add_url_rule(
+ rule,
+ endpoint,
+ view_func,
+ provide_automatic_options=provide_automatic_options,
+ **options,
+ )
+ )
+
+ @setupmethod
+ def app_template_filter(
+ self, name: str | None = None
+ ) -> t.Callable[[T_template_filter], T_template_filter]:
+ """Register a template filter, available in any template rendered by the
+ application. Equivalent to :meth:`.Flask.template_filter`.
+
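+ A minimal sketch (the filter and blueprint names are hypothetical)::
+
+     @bp.app_template_filter("reverse")
+     def reverse_filter(s):
+         return s[::-1]
+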
+ :param name: the optional name of the filter, otherwise the
+ function name will be used.
+ """
+
+ def decorator(f: T_template_filter) -> T_template_filter:
+ self.add_app_template_filter(f, name=name)
+ return f
+
+ return decorator
+
+ @setupmethod
+ def add_app_template_filter(
+ self, f: ft.TemplateFilterCallable, name: str | None = None
+ ) -> None:
+ """Register a template filter, available in any template rendered by the
+ application. Works like the :meth:`app_template_filter` decorator. Equivalent to
+ :meth:`.Flask.add_template_filter`.
+
+ :param name: the optional name of the filter, otherwise the
+ function name will be used.
+ """
+
+ def register_template(state: BlueprintSetupState) -> None:
+ state.app.jinja_env.filters[name or f.__name__] = f
+
+ self.record_once(register_template)
+
+ @setupmethod
+ def app_template_test(
+ self, name: str | None = None
+ ) -> t.Callable[[T_template_test], T_template_test]:
+ """Register a template test, available in any template rendered by the
+ application. Equivalent to :meth:`.Flask.template_test`.
+
+ .. versionadded:: 0.10
+
+ :param name: the optional name of the test, otherwise the
+ function name will be used.
+ """
+
+ def decorator(f: T_template_test) -> T_template_test:
+ self.add_app_template_test(f, name=name)
+ return f
+
+ return decorator
+
+ @setupmethod
+ def add_app_template_test(
+ self, f: ft.TemplateTestCallable, name: str | None = None
+ ) -> None:
+ """Register a template test, available in any template rendered by the
+ application. Works like the :meth:`app_template_test` decorator. Equivalent to
+ :meth:`.Flask.add_template_test`.
+
+ .. versionadded:: 0.10
+
+ :param name: the optional name of the test, otherwise the
+ function name will be used.
+ """
+
+ def register_template(state: BlueprintSetupState) -> None:
+ state.app.jinja_env.tests[name or f.__name__] = f
+
+ self.record_once(register_template)
+
+ @setupmethod
+ def app_template_global(
+ self, name: str | None = None
+ ) -> t.Callable[[T_template_global], T_template_global]:
+ """Register a template global, available in any template rendered by the
+ application. Equivalent to :meth:`.Flask.template_global`.
+
+ .. versionadded:: 0.10
+
+ :param name: the optional name of the global, otherwise the
+ function name will be used.
+ """
+
+ def decorator(f: T_template_global) -> T_template_global:
+ self.add_app_template_global(f, name=name)
+ return f
+
+ return decorator
+
+ @setupmethod
+ def add_app_template_global(
+ self, f: ft.TemplateGlobalCallable, name: str | None = None
+ ) -> None:
+ """Register a template global, available in any template rendered by the
+ application. Works like the :meth:`app_template_global` decorator. Equivalent to
+ :meth:`.Flask.add_template_global`.
+
+ .. versionadded:: 0.10
+
+ :param name: the optional name of the global, otherwise the
+ function name will be used.
+ """
+
+ def register_template(state: BlueprintSetupState) -> None:
+ state.app.jinja_env.globals[name or f.__name__] = f
+
+ self.record_once(register_template)
+
+ @setupmethod
+ def before_app_request(self, f: T_before_request) -> T_before_request:
+ """Like :meth:`before_request`, but before every request, not only those handled
+ by the blueprint. Equivalent to :meth:`.Flask.before_request`.
+ """
+ self.record_once(
+ lambda s: s.app.before_request_funcs.setdefault(None, []).append(f)
+ )
+ return f
+
+ @setupmethod
+ def after_app_request(self, f: T_after_request) -> T_after_request:
+ """Like :meth:`after_request`, but after every request, not only those handled
+ by the blueprint. Equivalent to :meth:`.Flask.after_request`.
+ """
+ self.record_once(
+ lambda s: s.app.after_request_funcs.setdefault(None, []).append(f)
+ )
+ return f
+
+ @setupmethod
+ def teardown_app_request(self, f: T_teardown) -> T_teardown:
+ """Like :meth:`teardown_request`, but after every request, not only those
+ handled by the blueprint. Equivalent to :meth:`.Flask.teardown_request`.
+ """
+ self.record_once(
+ lambda s: s.app.teardown_request_funcs.setdefault(None, []).append(f)
+ )
+ return f
+
+ @setupmethod
+ def app_context_processor(
+ self, f: T_template_context_processor
+ ) -> T_template_context_processor:
+ """Like :meth:`context_processor`, but for templates rendered by every view, not
+ only by the blueprint. Equivalent to :meth:`.Flask.context_processor`.
+ """
+ self.record_once(
+ lambda s: s.app.template_context_processors.setdefault(None, []).append(f)
+ )
+ return f
+
+ @setupmethod
+ def app_errorhandler(
+ self, code: type[Exception] | int
+ ) -> t.Callable[[T_error_handler], T_error_handler]:
+ """Like :meth:`errorhandler`, but for every request, not only those handled by
+ the blueprint. Equivalent to :meth:`.Flask.errorhandler`.
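+
+ A minimal sketch (a hypothetical 404 handler)::
+
+     @bp.app_errorhandler(404)
+     def page_not_found(e):
+         return "Not Found", 404
+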
+ """
+
+ def decorator(f: T_error_handler) -> T_error_handler:
+ def from_blueprint(state: BlueprintSetupState) -> None:
+ state.app.errorhandler(code)(f)
+
+ self.record_once(from_blueprint)
+ return f
+
+ return decorator
+
+ @setupmethod
+ def app_url_value_preprocessor(
+ self, f: T_url_value_preprocessor
+ ) -> T_url_value_preprocessor:
+ """Like :meth:`url_value_preprocessor`, but for every request, not only those
+ handled by the blueprint. Equivalent to :meth:`.Flask.url_value_preprocessor`.
+ """
+ self.record_once(
+ lambda s: s.app.url_value_preprocessors.setdefault(None, []).append(f)
+ )
+ return f
+
+ @setupmethod
+ def app_url_defaults(self, f: T_url_defaults) -> T_url_defaults:
+ """Like :meth:`url_defaults`, but for every request, not only those handled by
+ the blueprint. Equivalent to :meth:`.Flask.url_defaults`.
+ """
+ self.record_once(
+ lambda s: s.app.url_default_functions.setdefault(None, []).append(f)
+ )
+ return f
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/flask/sansio/scaffold.py b/stripe_config/.venv/lib/python3.12/site-packages/flask/sansio/scaffold.py
new file mode 100644
index 0000000..69e33a0
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/flask/sansio/scaffold.py
@@ -0,0 +1,801 @@
+from __future__ import annotations
+
+import importlib.util
+import os
+import pathlib
+import sys
+import typing as t
+from collections import defaultdict
+from functools import update_wrapper
+
+from jinja2 import BaseLoader
+from jinja2 import FileSystemLoader
+from werkzeug.exceptions import default_exceptions
+from werkzeug.exceptions import HTTPException
+from werkzeug.utils import cached_property
+
+from .. import typing as ft
+from ..helpers import get_root_path
+from ..templating import _default_template_ctx_processor
+
+if t.TYPE_CHECKING: # pragma: no cover
+ from click import Group
+
+# a singleton sentinel value for parameter defaults
+_sentinel = object()
+
+F = t.TypeVar("F", bound=t.Callable[..., t.Any])
+T_after_request = t.TypeVar("T_after_request", bound=ft.AfterRequestCallable[t.Any])
+T_before_request = t.TypeVar("T_before_request", bound=ft.BeforeRequestCallable)
+T_error_handler = t.TypeVar("T_error_handler", bound=ft.ErrorHandlerCallable)
+T_teardown = t.TypeVar("T_teardown", bound=ft.TeardownCallable)
+T_template_context_processor = t.TypeVar(
+ "T_template_context_processor", bound=ft.TemplateContextProcessorCallable
+)
+T_url_defaults = t.TypeVar("T_url_defaults", bound=ft.URLDefaultCallable)
+T_url_value_preprocessor = t.TypeVar(
+ "T_url_value_preprocessor", bound=ft.URLValuePreprocessorCallable
+)
+T_route = t.TypeVar("T_route", bound=ft.RouteCallable)
+
+
+def setupmethod(f: F) -> F:
+ f_name = f.__name__
+
+ def wrapper_func(self: Scaffold, *args: t.Any, **kwargs: t.Any) -> t.Any:
+ self._check_setup_finished(f_name)
+ return f(self, *args, **kwargs)
+
+ return t.cast(F, update_wrapper(wrapper_func, f))
+
+
+class Scaffold:
+ """Common behavior shared between :class:`~flask.Flask` and
+ :class:`~flask.blueprints.Blueprint`.
+
+ :param import_name: The import name of the module where this object
+ is defined. Usually :attr:`__name__` should be used.
+ :param static_folder: Path to a folder of static files to serve.
+ If this is set, a static route will be added.
+ :param static_url_path: URL prefix for the static route.
+ :param template_folder: Path to a folder containing template files
+ for rendering. If this is set, a Jinja loader will be added.
+ :param root_path: The path that static, template, and resource files
+ are relative to. Typically not set, it is discovered based on
+ the ``import_name``.
+
+ .. versionadded:: 2.0
+ """
+
+ cli: Group
+ name: str
+ _static_folder: str | None = None
+ _static_url_path: str | None = None
+
+ def __init__(
+ self,
+ import_name: str,
+ static_folder: str | os.PathLike[str] | None = None,
+ static_url_path: str | None = None,
+ template_folder: str | os.PathLike[str] | None = None,
+ root_path: str | None = None,
+ ):
+ #: The name of the package or module that this object belongs
+ #: to. Do not change this once it is set by the constructor.
+ self.import_name = import_name
+
+ self.static_folder = static_folder # type: ignore
+ self.static_url_path = static_url_path
+
+ #: The path to the templates folder, relative to
+ #: :attr:`root_path`, to add to the template loader. ``None`` if
+ #: templates should not be added.
+ self.template_folder = template_folder
+
+ if root_path is None:
+ root_path = get_root_path(self.import_name)
+
+ #: Absolute path to the package on the filesystem. Used to look
+ #: up resources contained in the package.
+ self.root_path = root_path
+
+ #: A dictionary mapping endpoint names to view functions.
+ #:
+ #: To register a view function, use the :meth:`route` decorator.
+ #:
+ #: This data structure is internal. It should not be modified
+ #: directly and its format may change at any time.
+ self.view_functions: dict[str, ft.RouteCallable] = {}
+
+ #: A data structure of registered error handlers, in the format
+ #: ``{scope: {code: {class: handler}}}``. The ``scope`` key is
+ #: the name of a blueprint the handlers are active for, or
+ #: ``None`` for all requests. The ``code`` key is the HTTP
+ #: status code for ``HTTPException``, or ``None`` for
+ #: other exceptions. The innermost dictionary maps exception
+ #: classes to handler functions.
+ #:
+ #: To register an error handler, use the :meth:`errorhandler`
+ #: decorator.
+ #:
+ #: This data structure is internal. It should not be modified
+ #: directly and its format may change at any time.
+ self.error_handler_spec: dict[
+ ft.AppOrBlueprintKey,
+ dict[int | None, dict[type[Exception], ft.ErrorHandlerCallable]],
+ ] = defaultdict(lambda: defaultdict(dict))
+
+ #: A data structure of functions to call at the beginning of
+ #: each request, in the format ``{scope: [functions]}``. The
+ #: ``scope`` key is the name of a blueprint the functions are
+ #: active for, or ``None`` for all requests.
+ #:
+ #: To register a function, use the :meth:`before_request`
+ #: decorator.
+ #:
+ #: This data structure is internal. It should not be modified
+ #: directly and its format may change at any time.
+ self.before_request_funcs: dict[
+ ft.AppOrBlueprintKey, list[ft.BeforeRequestCallable]
+ ] = defaultdict(list)
+
+ #: A data structure of functions to call at the end of each
+ #: request, in the format ``{scope: [functions]}``. The
+ #: ``scope`` key is the name of a blueprint the functions are
+ #: active for, or ``None`` for all requests.
+ #:
+ #: To register a function, use the :meth:`after_request`
+ #: decorator.
+ #:
+ #: This data structure is internal. It should not be modified
+ #: directly and its format may change at any time.
+ self.after_request_funcs: dict[
+ ft.AppOrBlueprintKey, list[ft.AfterRequestCallable[t.Any]]
+ ] = defaultdict(list)
+
+ #: A data structure of functions to call at the end of each
+ #: request even if an exception is raised, in the format
+ #: ``{scope: [functions]}``. The ``scope`` key is the name of a
+ #: blueprint the functions are active for, or ``None`` for all
+ #: requests.
+ #:
+ #: To register a function, use the :meth:`teardown_request`
+ #: decorator.
+ #:
+ #: This data structure is internal. It should not be modified
+ #: directly and its format may change at any time.
+ self.teardown_request_funcs: dict[
+ ft.AppOrBlueprintKey, list[ft.TeardownCallable]
+ ] = defaultdict(list)
+
+ #: A data structure of functions to call to pass extra context
+ #: values when rendering templates, in the format
+ #: ``{scope: [functions]}``. The ``scope`` key is the name of a
+ #: blueprint the functions are active for, or ``None`` for all
+ #: requests.
+ #:
+ #: To register a function, use the :meth:`context_processor`
+ #: decorator.
+ #:
+ #: This data structure is internal. It should not be modified
+ #: directly and its format may change at any time.
+ self.template_context_processors: dict[
+ ft.AppOrBlueprintKey, list[ft.TemplateContextProcessorCallable]
+ ] = defaultdict(list, {None: [_default_template_ctx_processor]})
+
+ #: A data structure of functions to call to modify the keyword
+ #: arguments passed to the view function, in the format
+ #: ``{scope: [functions]}``. The ``scope`` key is the name of a
+ #: blueprint the functions are active for, or ``None`` for all
+ #: requests.
+ #:
+ #: To register a function, use the
+ #: :meth:`url_value_preprocessor` decorator.
+ #:
+ #: This data structure is internal. It should not be modified
+ #: directly and its format may change at any time.
+ self.url_value_preprocessors: dict[
+ ft.AppOrBlueprintKey,
+ list[ft.URLValuePreprocessorCallable],
+ ] = defaultdict(list)
+
+ #: A data structure of functions to call to modify the keyword
+ #: arguments when generating URLs, in the format
+ #: ``{scope: [functions]}``. The ``scope`` key is the name of a
+ #: blueprint the functions are active for, or ``None`` for all
+ #: requests.
+ #:
+ #: To register a function, use the :meth:`url_defaults`
+ #: decorator.
+ #:
+ #: This data structure is internal. It should not be modified
+ #: directly and its format may change at any time.
+ self.url_default_functions: dict[
+ ft.AppOrBlueprintKey, list[ft.URLDefaultCallable]
+ ] = defaultdict(list)
+
+ def __repr__(self) -> str:
+ return f"<{type(self).__name__} {self.name!r}>"
+
+ def _check_setup_finished(self, f_name: str) -> None:
+ raise NotImplementedError
+
+ @property
+ def static_folder(self) -> str | None:
+ """The absolute path to the configured static folder. ``None``
+ if no static folder is set.
+ """
+ if self._static_folder is not None:
+ return os.path.join(self.root_path, self._static_folder)
+ else:
+ return None
+
+ @static_folder.setter
+ def static_folder(self, value: str | os.PathLike[str] | None) -> None:
+ if value is not None:
+ value = os.fspath(value).rstrip(r"\/")
+
+ self._static_folder = value
+
+ @property
+ def has_static_folder(self) -> bool:
+ """``True`` if :attr:`static_folder` is set.
+
+ .. versionadded:: 0.5
+ """
+ return self.static_folder is not None
+
+ @property
+ def static_url_path(self) -> str | None:
+ """The URL prefix that the static route will be accessible from.
+
+ If it was not configured during init, it is derived from
+ :attr:`static_folder`.
+ """
+ if self._static_url_path is not None:
+ return self._static_url_path
+
+ if self.static_folder is not None:
+ basename = os.path.basename(self.static_folder)
+ return f"/{basename}".rstrip("/")
+
+ return None
+
+ @static_url_path.setter
+ def static_url_path(self, value: str | None) -> None:
+ if value is not None:
+ value = value.rstrip("/")
+
+ self._static_url_path = value
+
+ @cached_property
+ def jinja_loader(self) -> BaseLoader | None:
+ """The Jinja loader for this object's templates. By default this
+ is a class :class:`jinja2.loaders.FileSystemLoader` to
+ :attr:`template_folder` if it is set.
+
+ .. versionadded:: 0.5
+ """
+ if self.template_folder is not None:
+ return FileSystemLoader(os.path.join(self.root_path, self.template_folder))
+ else:
+ return None
+
+ def _method_route(
+ self,
+ method: str,
+ rule: str,
+ options: dict[str, t.Any],
+ ) -> t.Callable[[T_route], T_route]:
+ if "methods" in options:
+ raise TypeError("Use the 'route' decorator to use the 'methods' argument.")
+
+ return self.route(rule, methods=[method], **options)
+
+ @setupmethod
+ def get(self, rule: str, **options: t.Any) -> t.Callable[[T_route], T_route]:
+ """Shortcut for :meth:`route` with ``methods=["GET"]``.
+
+ .. versionadded:: 2.0
+ """
+ return self._method_route("GET", rule, options)
+
+ @setupmethod
+ def post(self, rule: str, **options: t.Any) -> t.Callable[[T_route], T_route]:
+ """Shortcut for :meth:`route` with ``methods=["POST"]``.
+
+ .. versionadded:: 2.0
+ """
+ return self._method_route("POST", rule, options)
+
+ @setupmethod
+ def put(self, rule: str, **options: t.Any) -> t.Callable[[T_route], T_route]:
+ """Shortcut for :meth:`route` with ``methods=["PUT"]``.
+
+ .. versionadded:: 2.0
+ """
+ return self._method_route("PUT", rule, options)
+
+ @setupmethod
+ def delete(self, rule: str, **options: t.Any) -> t.Callable[[T_route], T_route]:
+ """Shortcut for :meth:`route` with ``methods=["DELETE"]``.
+
+ .. versionadded:: 2.0
+ """
+ return self._method_route("DELETE", rule, options)
+
+ @setupmethod
+ def patch(self, rule: str, **options: t.Any) -> t.Callable[[T_route], T_route]:
+ """Shortcut for :meth:`route` with ``methods=["PATCH"]``.
+
+ .. versionadded:: 2.0
+ """
+ return self._method_route("PATCH", rule, options)
+
+ @setupmethod
+ def route(self, rule: str, **options: t.Any) -> t.Callable[[T_route], T_route]:
+ """Decorate a view function to register it with the given URL
+ rule and options. Calls :meth:`add_url_rule`, which has more
+ details about the implementation.
+
+ .. code-block:: python
+
+ @app.route("/")
+ def index():
+ return "Hello, World!"
+
+ See :ref:`url-route-registrations`.
+
+ The endpoint name for the route defaults to the name of the view
+ function if the ``endpoint`` parameter isn't passed.
+
+ The ``methods`` parameter defaults to ``["GET"]``. ``HEAD`` and
+ ``OPTIONS`` are added automatically.
+
+ :param rule: The URL rule string.
+ :param options: Extra options passed to the
+ :class:`~werkzeug.routing.Rule` object.
+ """
+
+ def decorator(f: T_route) -> T_route:
+ endpoint = options.pop("endpoint", None)
+ self.add_url_rule(rule, endpoint, f, **options)
+ return f
+
+ return decorator
+
+ @setupmethod
+ def add_url_rule(
+ self,
+ rule: str,
+ endpoint: str | None = None,
+ view_func: ft.RouteCallable | None = None,
+ provide_automatic_options: bool | None = None,
+ **options: t.Any,
+ ) -> None:
+ """Register a rule for routing incoming requests and building
+ URLs. The :meth:`route` decorator is a shortcut to call this
+ with the ``view_func`` argument. These are equivalent:
+
+ .. code-block:: python
+
+ @app.route("/")
+ def index():
+ ...
+
+ .. code-block:: python
+
+ def index():
+ ...
+
+ app.add_url_rule("/", view_func=index)
+
+ See :ref:`url-route-registrations`.
+
+ The endpoint name for the route defaults to the name of the view
+ function if the ``endpoint`` parameter isn't passed. An error
+ will be raised if a function has already been registered for the
+ endpoint.
+
+ The ``methods`` parameter defaults to ``["GET"]``. ``HEAD`` is
+ always added automatically, and ``OPTIONS`` is added
+ automatically by default.
+
+ ``view_func`` does not necessarily need to be passed, but if the
+ rule should participate in routing an endpoint name must be
+ associated with a view function at some point with the
+ :meth:`endpoint` decorator.
+
+ .. code-block:: python
+
+ app.add_url_rule("/", endpoint="index")
+
+ @app.endpoint("index")
+ def index():
+ ...
+
+ If ``view_func`` has a ``required_methods`` attribute, those
+ methods are added to the passed and automatic methods. If it
+ has a ``provide_automatic_options`` attribute, it is used as the
+ default if the parameter is not passed.
+
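+ A minimal sketch of a view function carrying these attributes (the
+ view and rule are hypothetical):
+
+ .. code-block:: python
+
+     def upload():
+         ...
+
+     upload.required_methods = {"POST"}
+     upload.provide_automatic_options = False
+     app.add_url_rule("/upload", view_func=upload)
+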
+ :param rule: The URL rule string.
+ :param endpoint: The endpoint name to associate with the rule
+ and view function. Used when routing and building URLs.
+ Defaults to ``view_func.__name__``.
+ :param view_func: The view function to associate with the
+ endpoint name.
+ :param provide_automatic_options: Add the ``OPTIONS`` method and
+ respond to ``OPTIONS`` requests automatically.
+ :param options: Extra options passed to the
+ :class:`~werkzeug.routing.Rule` object.
+ """
+ raise NotImplementedError
+
+ @setupmethod
+ def endpoint(self, endpoint: str) -> t.Callable[[F], F]:
+ """Decorate a view function to register it for the given
+ endpoint. Used if a rule is added without a ``view_func`` with
+ :meth:`add_url_rule`.
+
+ .. code-block:: python
+
+ app.add_url_rule("/ex", endpoint="example")
+
+ @app.endpoint("example")
+ def example():
+ ...
+
+ :param endpoint: The endpoint name to associate with the view
+ function.
+ """
+
+ def decorator(f: F) -> F:
+ self.view_functions[endpoint] = f
+ return f
+
+ return decorator
+
+ @setupmethod
+ def before_request(self, f: T_before_request) -> T_before_request:
+ """Register a function to run before each request.
+
+ For example, this can be used to open a database connection, or
+ to load the logged in user from the session.
+
+ .. code-block:: python
+
+ @app.before_request
+ def load_user():
+ if "user_id" in session:
+ g.user = db.session.get(session["user_id"])
+
+ The function will be called without any arguments. If it returns
+ a non-``None`` value, the value is handled as if it was the
+ return value from the view, and further request handling is
+ stopped.
+
+ This is available on both app and blueprint objects. When used on an app, this
+ executes before every request. When used on a blueprint, this executes before
+ every request that the blueprint handles. To register with a blueprint and
+ execute before every request, use :meth:`.Blueprint.before_app_request`.
+ """
+ self.before_request_funcs.setdefault(None, []).append(f)
+ return f
+
+ @setupmethod
+ def after_request(self, f: T_after_request) -> T_after_request:
+ """Register a function to run after each request to this object.
+
+ The function is called with the response object, and must return
+ a response object. This allows the functions to modify or
+ replace the response before it is sent.
+
+ If a function raises an exception, any remaining
+ ``after_request`` functions will not be called. Therefore, this
+ should not be used for actions that must execute, such as to
+ close resources. Use :meth:`teardown_request` for that.
+
+ This is available on both app and blueprint objects. When used on an app, this
+ executes after every request. When used on a blueprint, this executes after
+ every request that the blueprint handles. To register with a blueprint and
+ execute after every request, use :meth:`.Blueprint.after_app_request`.
+ """
+ self.after_request_funcs.setdefault(None, []).append(f)
+ return f
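+ # Usage sketch (illustrative, not part of Flask): an ``after_request``
+ # callback receives the response and must return a response object, e.g.
+ #
+ #     @app.after_request
+ #     def set_security_headers(response):
+ #         response.headers["X-Content-Type-Options"] = "nosniff"
+ #         return response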
+
+ @setupmethod
+ def teardown_request(self, f: T_teardown) -> T_teardown:
+ """Register a function to be called when the request context is
+ popped. Typically this happens at the end of each request, but
+ contexts may be pushed manually as well during testing.
+
+ .. code-block:: python
+
+ with app.test_request_context():
+ ...
+
+ When the ``with`` block exits (or ``ctx.pop()`` is called), the
+ teardown functions are called just before the request context is
+ made inactive.
+
+ When a teardown function was called because of an unhandled
+ exception it will be passed an error object. If an
+ :meth:`errorhandler` is registered, it will handle the exception
+ and the teardown will not receive it.
+
+ Teardown functions must avoid raising exceptions. If they
+ execute code that might fail they must surround that code with a
+ ``try``/``except`` block and log any errors.
+
+ The return values of teardown functions are ignored.
+
+ This is available on both app and blueprint objects. When used on an app, this
+ executes after every request. When used on a blueprint, this executes after
+ every request that the blueprint handles. To register with a blueprint and
+ execute after every request, use :meth:`.Blueprint.teardown_app_request`.
+ """
+ self.teardown_request_funcs.setdefault(None, []).append(f)
+ return f
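+ # Usage sketch (illustrative, not part of Flask; assumes a connection was
+ # stored on ``g`` earlier in the request): a teardown callback receives the
+ # unhandled exception (or ``None``) and should never raise, e.g.
+ #
+ #     @app.teardown_request
+ #     def close_db(exc):
+ #         db = g.pop("db", None)
+ #         if db is not None:
+ #             db.close()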
+
+ @setupmethod
+ def context_processor(
+ self,
+ f: T_template_context_processor,
+ ) -> T_template_context_processor:
+ """Registers a template context processor function. These functions run before
+ rendering a template. The keys of the returned dict are added as variables
+ available in the template.
+
+ This is available on both app and blueprint objects. When used on an app, this
+ is called for every rendered template. When used on a blueprint, this is called
+ for templates rendered from the blueprint's views. To register with a blueprint
+ and affect every template, use :meth:`.Blueprint.app_context_processor`.
+ """
+ self.template_context_processors[None].append(f)
+ return f
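+ # Usage sketch (illustrative, not part of Flask): a context processor
+ # returns a dict whose keys become template variables, e.g.
+ #
+ #     @app.context_processor
+ #     def inject_site_name():
+ #         return {"site_name": "Example Site"}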
+
+ @setupmethod
+ def url_value_preprocessor(
+ self,
+ f: T_url_value_preprocessor,
+ ) -> T_url_value_preprocessor:
+ """Register a URL value preprocessor function for all view
+ functions in the application. These functions will be called before the
+ :meth:`before_request` functions.
+
+ The function can modify the values captured from the matched url before
+ they are passed to the view. For example, this can be used to pop a
+ common language code value and place it in ``g`` rather than pass it to
+ every view.
+
+ The function is passed the endpoint name and values dict. The return
+ value is ignored.
+
+ This is available on both app and blueprint objects. When used on an app, this
+ is called for every request. When used on a blueprint, this is called for
+ requests that the blueprint handles. To register with a blueprint and affect
+ every request, use :meth:`.Blueprint.app_url_value_preprocessor`.
+ """
+ self.url_value_preprocessors[None].append(f)
+ return f
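+ # Usage sketch (illustrative, not part of Flask): pop a shared URL value,
+ # such as a language code, into ``g`` before the view runs, e.g.
+ #
+ #     @app.url_value_preprocessor
+ #     def pull_lang_code(endpoint, values):
+ #         if values is not None:
+ #             g.lang_code = values.pop("lang_code", None)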
+
+ @setupmethod
+ def url_defaults(self, f: T_url_defaults) -> T_url_defaults:
+ """Callback function for URL defaults for all view functions of the
+ application. It's called with the endpoint and values and should
+ update the values passed in place.
+
+ This is available on both app and blueprint objects. When used on an app, this
+ is called for every request. When used on a blueprint, this is called for
+ requests that the blueprint handles. To register with a blueprint and affect
+ every request, use :meth:`.Blueprint.app_url_defaults`.
+ """
+ self.url_default_functions[None].append(f)
+ return f
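+ # Usage sketch (illustrative, not part of Flask): supply a default URL value
+ # so ``url_for`` calls don't need to pass it explicitly, e.g.
+ #
+ #     @app.url_defaults
+ #     def add_lang_code(endpoint, values):
+ #         values.setdefault("lang_code", getattr(g, "lang_code", "en"))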
+
+ @setupmethod
+ def errorhandler(
+ self, code_or_exception: type[Exception] | int
+ ) -> t.Callable[[T_error_handler], T_error_handler]:
+ """Register a function to handle errors by code or exception class.
+
+ A decorator that is used to register a function given an
+ error code. Example::
+
+ @app.errorhandler(404)
+ def page_not_found(error):
+ return 'This page does not exist', 404
+
+ You can also register handlers for arbitrary exceptions::
+
+ @app.errorhandler(DatabaseError)
+ def special_exception_handler(error):
+ return 'Database connection failed', 500
+
+ This is available on both app and blueprint objects. When used on an app, this
+ can handle errors from every request. When used on a blueprint, this can handle
+ errors from requests that the blueprint handles. To register with a blueprint
+ and affect every request, use :meth:`.Blueprint.app_errorhandler`.
+
+ .. versionadded:: 0.7
+ Use :meth:`register_error_handler` instead of modifying
+ :attr:`error_handler_spec` directly, for application wide error
+ handlers.
+
+ .. versionadded:: 0.7
+ One can now additionally also register custom exception types
+ that do not necessarily have to be a subclass of the
+ :class:`~werkzeug.exceptions.HTTPException` class.
+
+ :param code_or_exception: the code as integer for the handler, or
+ an arbitrary exception
+ """
+
+ def decorator(f: T_error_handler) -> T_error_handler:
+ self.register_error_handler(code_or_exception, f)
+ return f
+
+ return decorator
+
+ @setupmethod
+ def register_error_handler(
+ self,
+ code_or_exception: type[Exception] | int,
+ f: ft.ErrorHandlerCallable,
+ ) -> None:
+ """Alternative error attach function to the :meth:`errorhandler`
+ decorator that is more straightforward to use for non decorator
+ usage.
+
+ .. versionadded:: 0.7
+ """
+ exc_class, code = self._get_exc_class_and_code(code_or_exception)
+ self.error_handler_spec[None][code][exc_class] = f
+
+ @staticmethod
+ def _get_exc_class_and_code(
+ exc_class_or_code: type[Exception] | int,
+ ) -> tuple[type[Exception], int | None]:
+ """Get the exception class being handled. For HTTP status codes
+ or ``HTTPException`` subclasses, return both the exception and
+ status code.
+
+ :param exc_class_or_code: Any exception class, or an HTTP status
+ code as an integer.
+ """
+ exc_class: type[Exception]
+
+ if isinstance(exc_class_or_code, int):
+ try:
+ exc_class = default_exceptions[exc_class_or_code]
+ except KeyError:
+ raise ValueError(
+ f"'{exc_class_or_code}' is not a recognized HTTP"
+ " error code. Use a subclass of HTTPException with"
+ " that code instead."
+ ) from None
+ else:
+ exc_class = exc_class_or_code
+
+ if isinstance(exc_class, Exception):
+ raise TypeError(
+ f"{exc_class!r} is an instance, not a class. Handlers"
+ " can only be registered for Exception classes or HTTP"
+ " error codes."
+ )
+
+ if not issubclass(exc_class, Exception):
+ raise ValueError(
+ f"'{exc_class.__name__}' is not a subclass of Exception."
+ " Handlers can only be registered for Exception classes"
+ " or HTTP error codes."
+ )
+
+ if issubclass(exc_class, HTTPException):
+ return exc_class, exc_class.code
+ else:
+ return exc_class, None
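+ # Illustrative behaviour of the helper above (derived from the code, not
+ # additional API):
+ #
+ #     _get_exc_class_and_code(404)       -> (NotFound, 404)
+ #     _get_exc_class_and_code(KeyError)  -> (KeyError, None)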
+
+
+def _endpoint_from_view_func(view_func: ft.RouteCallable) -> str:
+ """Internal helper that returns the default endpoint for a given
+ function. This always is the function name.
+ """
+ assert view_func is not None, "expected view func if endpoint is not provided."
+ return view_func.__name__
+
+
+def _path_is_relative_to(path: pathlib.PurePath, base: str) -> bool:
+ # Path.is_relative_to doesn't exist until Python 3.9
+ try:
+ path.relative_to(base)
+ return True
+ except ValueError:
+ return False
+
+
+def _find_package_path(import_name: str) -> str:
+ """Find the path that contains the package or module."""
+ root_mod_name, _, _ = import_name.partition(".")
+
+ try:
+ root_spec = importlib.util.find_spec(root_mod_name)
+
+ if root_spec is None:
+ raise ValueError("not found")
+ except (ImportError, ValueError):
+ # ImportError: the machinery told us it does not exist
+ # ValueError:
+ # - the module name was invalid
+ # - the module name is __main__
+ # - we raised `ValueError` due to `root_spec` being `None`
+ return os.getcwd()
+
+ if root_spec.submodule_search_locations:
+ if root_spec.origin is None or root_spec.origin == "namespace":
+ # namespace package
+ package_spec = importlib.util.find_spec(import_name)
+
+ if package_spec is not None and package_spec.submodule_search_locations:
+ # Pick the path in the namespace that contains the submodule.
+ package_path = pathlib.Path(
+ os.path.commonpath(package_spec.submodule_search_locations)
+ )
+ search_location = next(
+ location
+ for location in root_spec.submodule_search_locations
+ if _path_is_relative_to(package_path, location)
+ )
+ else:
+ # Pick the first path.
+ search_location = root_spec.submodule_search_locations[0]
+
+ return os.path.dirname(search_location)
+ else:
+ # package with __init__.py
+ return os.path.dirname(os.path.dirname(root_spec.origin))
+ else:
+ # module
+ return os.path.dirname(root_spec.origin) # type: ignore[type-var, return-value]
+
+
+def find_package(import_name: str) -> tuple[str | None, str]:
+ """Find the prefix that a package is installed under, and the path
+ that it would be imported from.
+
+ The prefix is the directory containing the standard directory
+ hierarchy (lib, bin, etc.). If the package is not installed to the
+ system (:attr:`sys.prefix`) or a virtualenv (``site-packages``),
+ ``None`` is returned.
+
+ The path is the entry in :attr:`sys.path` that contains the package
+ for import. If the package is not installed, it's assumed that the
+ package was imported from the current working directory.
+ """
+ package_path = _find_package_path(import_name)
+ py_prefix = os.path.abspath(sys.prefix)
+
+ # installed to the system
+ if _path_is_relative_to(pathlib.PurePath(package_path), py_prefix):
+ return py_prefix, package_path
+
+ site_parent, site_folder = os.path.split(package_path)
+
+ # installed to a virtualenv
+ if site_folder.lower() == "site-packages":
+ parent, folder = os.path.split(site_parent)
+
+ # Windows (prefix/lib/site-packages)
+ if folder.lower() == "lib":
+ return parent, package_path
+
+ # Unix (prefix/lib/pythonX.Y/site-packages)
+ if os.path.basename(parent).lower() == "lib":
+ return os.path.dirname(parent), package_path
+
+ # something else (prefix/site-packages)
+ return site_parent, package_path
+
+ # not installed
+ return None, package_path
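+# Illustrative example (paths are hypothetical): for a package installed into a
+# virtualenv, ``find_package("flask")`` might return something like
+# ("/path/to/venv", "/path/to/venv/lib/python3.12/site-packages"); for an
+# uninstalled package imported from the working directory it returns
+# (None, "/path/to/project").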
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/flask/sessions.py b/stripe_config/.venv/lib/python3.12/site-packages/flask/sessions.py
new file mode 100644
index 0000000..ee19ad6
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/flask/sessions.py
@@ -0,0 +1,379 @@
+from __future__ import annotations
+
+import hashlib
+import typing as t
+from collections.abc import MutableMapping
+from datetime import datetime
+from datetime import timezone
+
+from itsdangerous import BadSignature
+from itsdangerous import URLSafeTimedSerializer
+from werkzeug.datastructures import CallbackDict
+
+from .json.tag import TaggedJSONSerializer
+
+if t.TYPE_CHECKING: # pragma: no cover
+ import typing_extensions as te
+
+ from .app import Flask
+ from .wrappers import Request
+ from .wrappers import Response
+
+
+# TODO generic when Python > 3.8
+class SessionMixin(MutableMapping): # type: ignore[type-arg]
+ """Expands a basic dictionary with session attributes."""
+
+ @property
+ def permanent(self) -> bool:
+ """This reflects the ``'_permanent'`` key in the dict."""
+ return self.get("_permanent", False)
+
+ @permanent.setter
+ def permanent(self, value: bool) -> None:
+ self["_permanent"] = bool(value)
+
+ #: Some implementations can detect whether a session is newly
+ #: created, but that is not guaranteed. Use with caution. The mixin
+ #: default is hard-coded ``False``.
+ new = False
+
+ #: Some implementations can detect changes to the session and set
+ #: this when that happens. The mixin default is hard coded to
+ #: ``True``.
+ modified = True
+
+ #: Some implementations can detect when session data is read or
+ #: written and set this when that happens. The mixin default is hard
+ #: coded to ``True``.
+ accessed = True
+
+
+# TODO generic when Python > 3.8
+class SecureCookieSession(CallbackDict, SessionMixin): # type: ignore[type-arg]
+ """Base class for sessions based on signed cookies.
+
+ This session backend will set the :attr:`modified` and
+ :attr:`accessed` attributes. It cannot reliably track whether a
+ session is new (vs. empty), so :attr:`new` remains hard coded to
+ ``False``.
+ """
+
+ #: When data is changed, this is set to ``True``. Only the session
+ #: dictionary itself is tracked; if the session contains mutable
+ #: data (for example a nested dict) then this must be set to
+ #: ``True`` manually when modifying that data. The session cookie
+ #: will only be written to the response if this is ``True``.
+ modified = False
+
+ #: When data is read or written, this is set to ``True``. Used by
+ #: :class:`.SecureCookieSessionInterface` to add a ``Vary: Cookie``
+ #: header, which allows caching proxies to cache different pages for
+ #: different users.
+ accessed = False
+
+ def __init__(self, initial: t.Any = None) -> None:
+ def on_update(self: te.Self) -> None:
+ self.modified = True
+ self.accessed = True
+
+ super().__init__(initial, on_update)
+
+ def __getitem__(self, key: str) -> t.Any:
+ self.accessed = True
+ return super().__getitem__(key)
+
+ def get(self, key: str, default: t.Any = None) -> t.Any:
+ self.accessed = True
+ return super().get(key, default)
+
+ def setdefault(self, key: str, default: t.Any = None) -> t.Any:
+ self.accessed = True
+ return super().setdefault(key, default)
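+ # Usage sketch (illustrative, not part of Flask): changes to nested mutable
+ # values are not detected automatically, so mark the session as modified
+ # yourself, e.g.
+ #
+ #     session["cart"]["items"].append("book")
+ #     session.modified = True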
+
+
+class NullSession(SecureCookieSession):
+ """Class used to generate nicer error messages if sessions are not
+ available. Will still allow read-only access to the empty session
+ but fail on setting.
+ """
+
+ def _fail(self, *args: t.Any, **kwargs: t.Any) -> t.NoReturn:
+ raise RuntimeError(
+ "The session is unavailable because no secret "
+ "key was set. Set the secret_key on the "
+ "application to something unique and secret."
+ )
+
+ __setitem__ = __delitem__ = clear = pop = popitem = update = setdefault = _fail # type: ignore # noqa: B950
+ del _fail
+
+
+class SessionInterface:
+ """The basic interface you have to implement in order to replace the
+ default session interface which uses werkzeug's securecookie
+ implementation. The only methods you have to implement are
+ :meth:`open_session` and :meth:`save_session`, the others have
+ useful defaults which you don't need to change.
+
+ The session object returned by the :meth:`open_session` method has to
+ provide a dictionary like interface plus the properties and methods
+ from the :class:`SessionMixin`. We recommend just subclassing a dict
+ and adding that mixin::
+
+ class Session(dict, SessionMixin):
+ pass
+
+ If :meth:`open_session` returns ``None`` Flask will call into
+ :meth:`make_null_session` to create a session that acts as replacement
+ if the session support cannot work because some requirement is not
+ fulfilled. The default :class:`NullSession` class that is created
+ will complain that the secret key was not set.
+
+ To replace the session interface on an application all you have to do
+ is to assign :attr:`flask.Flask.session_interface`::
+
+ app = Flask(__name__)
+ app.session_interface = MySessionInterface()
+
+ Multiple requests with the same session may be sent and handled
+ concurrently. When implementing a new session interface, consider
+ whether reads or writes to the backing store must be synchronized.
+ There is no guarantee on the order in which the session for each
+ request is opened or saved; it will occur in the order that requests
+ begin and end processing.
+
+ .. versionadded:: 0.8
+ """
+
+ #: :meth:`make_null_session` will look here for the class that should
+ #: be created when a null session is requested. Likewise the
+ #: :meth:`is_null_session` method will perform a typecheck against
+ #: this type.
+ null_session_class = NullSession
+
+ #: A flag that indicates if the session interface is pickle based.
+ #: This can be used by Flask extensions to make a decision in regards
+ #: to how to deal with the session object.
+ #:
+ #: .. versionadded:: 0.10
+ pickle_based = False
+
+ def make_null_session(self, app: Flask) -> NullSession:
+ """Creates a null session which acts as a replacement object if the
+ real session support could not be loaded due to a configuration
+ error. This mainly aids the user experience: the null session still
+ supports lookups without complaining, while modifications raise a
+ helpful error message explaining what failed.
+
+ This creates an instance of :attr:`null_session_class` by default.
+ """
+ return self.null_session_class()
+
+ def is_null_session(self, obj: object) -> bool:
+ """Checks if a given object is a null session. Null sessions are
+ not asked to be saved.
+
+ This checks if the object is an instance of :attr:`null_session_class`
+ by default.
+ """
+ return isinstance(obj, self.null_session_class)
+
+ def get_cookie_name(self, app: Flask) -> str:
+ """The name of the session cookie. Uses``app.config["SESSION_COOKIE_NAME"]``."""
+ return app.config["SESSION_COOKIE_NAME"] # type: ignore[no-any-return]
+
+ def get_cookie_domain(self, app: Flask) -> str | None:
+ """The value of the ``Domain`` parameter on the session cookie. If not set,
+ browsers will only send the cookie to the exact domain it was set from.
+ Otherwise, they will send it to any subdomain of the given value as well.
+
+ Uses the :data:`SESSION_COOKIE_DOMAIN` config.
+
+ .. versionchanged:: 2.3
+ Not set by default, does not fall back to ``SERVER_NAME``.
+ """
+ return app.config["SESSION_COOKIE_DOMAIN"] # type: ignore[no-any-return]
+
+ def get_cookie_path(self, app: Flask) -> str:
+ """Returns the path for which the cookie should be valid. The
+ default implementation uses the value from the ``SESSION_COOKIE_PATH``
+ config var if it's set, and falls back to ``APPLICATION_ROOT`` or
+ uses ``/`` if it's ``None``.
+ """
+ return app.config["SESSION_COOKIE_PATH"] or app.config["APPLICATION_ROOT"] # type: ignore[no-any-return]
+
+ def get_cookie_httponly(self, app: Flask) -> bool:
+ """Returns True if the session cookie should be httponly. This
+ currently just returns the value of the ``SESSION_COOKIE_HTTPONLY``
+ config var.
+ """
+ return app.config["SESSION_COOKIE_HTTPONLY"] # type: ignore[no-any-return]
+
+ def get_cookie_secure(self, app: Flask) -> bool:
+ """Returns True if the cookie should be secure. This currently
+ just returns the value of the ``SESSION_COOKIE_SECURE`` setting.
+ """
+ return app.config["SESSION_COOKIE_SECURE"] # type: ignore[no-any-return]
+
+ def get_cookie_samesite(self, app: Flask) -> str | None:
+ """Return ``'Strict'`` or ``'Lax'`` if the cookie should use the
+ ``SameSite`` attribute. This currently just returns the value of
+ the :data:`SESSION_COOKIE_SAMESITE` setting.
+ """
+ return app.config["SESSION_COOKIE_SAMESITE"] # type: ignore[no-any-return]
+
+ def get_expiration_time(self, app: Flask, session: SessionMixin) -> datetime | None:
+ """A helper method that returns an expiration date for the session
+ or ``None`` if the session is linked to the browser session. The
+ default implementation returns now + the permanent session
+ lifetime configured on the application.
+ """
+ if session.permanent:
+ return datetime.now(timezone.utc) + app.permanent_session_lifetime
+ return None
+
+ def should_set_cookie(self, app: Flask, session: SessionMixin) -> bool:
+ """Used by session backends to determine if a ``Set-Cookie`` header
+ should be set for this session cookie for this response. If the session
+ has been modified, the cookie is set. If the session is permanent and
+ the ``SESSION_REFRESH_EACH_REQUEST`` config is true, the cookie is
+ always set.
+
+ This check is usually skipped if the session was deleted.
+
+ .. versionadded:: 0.11
+ """
+
+ return session.modified or (
+ session.permanent and app.config["SESSION_REFRESH_EACH_REQUEST"]
+ )
+
+ def open_session(self, app: Flask, request: Request) -> SessionMixin | None:
+ """This is called at the beginning of each request, after
+ pushing the request context, before matching the URL.
+
+ This must return an object which implements a dictionary-like
+ interface as well as the :class:`SessionMixin` interface.
+
+ This will return ``None`` to indicate that loading failed in
+ some way that is not immediately an error. The request
+ context will fall back to using :meth:`make_null_session`
+ in this case.
+ """
+ raise NotImplementedError()
+
+ def save_session(
+ self, app: Flask, session: SessionMixin, response: Response
+ ) -> None:
+ """This is called at the end of each request, after generating
+ a response, before removing the request context. It is skipped
+ if :meth:`is_null_session` returns ``True``.
+ """
+ raise NotImplementedError()
+
+
+session_json_serializer = TaggedJSONSerializer()
+
+
+def _lazy_sha1(string: bytes = b"") -> t.Any:
+ """Don't access ``hashlib.sha1`` until runtime. FIPS builds may not include
+ SHA-1, in which case the import and use as a default would fail before the
+ developer can configure something else.
+ """
+ return hashlib.sha1(string)
+
+
+class SecureCookieSessionInterface(SessionInterface):
+ """The default session interface that stores sessions in signed cookies
+ through the :mod:`itsdangerous` module.
+ """
+
+ #: the salt that should be applied on top of the secret key for the
+ #: signing of cookie based sessions.
+ salt = "cookie-session"
+ #: the hash function to use for the signature. The default is sha1
+ digest_method = staticmethod(_lazy_sha1)
+ #: the name of the itsdangerous supported key derivation. The default
+ #: is hmac.
+ key_derivation = "hmac"
+ #: A python serializer for the payload. The default is a compact
+ #: JSON derived serializer with support for some extra Python types
+ #: such as datetime objects or tuples.
+ serializer = session_json_serializer
+ session_class = SecureCookieSession
+
+ def get_signing_serializer(self, app: Flask) -> URLSafeTimedSerializer | None:
+ if not app.secret_key:
+ return None
+ signer_kwargs = dict(
+ key_derivation=self.key_derivation, digest_method=self.digest_method
+ )
+ return URLSafeTimedSerializer(
+ app.secret_key,
+ salt=self.salt,
+ serializer=self.serializer,
+ signer_kwargs=signer_kwargs,
+ )
+
+ def open_session(self, app: Flask, request: Request) -> SecureCookieSession | None:
+ s = self.get_signing_serializer(app)
+ if s is None:
+ return None
+ val = request.cookies.get(self.get_cookie_name(app))
+ if not val:
+ return self.session_class()
+ max_age = int(app.permanent_session_lifetime.total_seconds())
+ try:
+ data = s.loads(val, max_age=max_age)
+ return self.session_class(data)
+ except BadSignature:
+ return self.session_class()
+
+ def save_session(
+ self, app: Flask, session: SessionMixin, response: Response
+ ) -> None:
+ name = self.get_cookie_name(app)
+ domain = self.get_cookie_domain(app)
+ path = self.get_cookie_path(app)
+ secure = self.get_cookie_secure(app)
+ samesite = self.get_cookie_samesite(app)
+ httponly = self.get_cookie_httponly(app)
+
+ # Add a "Vary: Cookie" header if the session was accessed at all.
+ if session.accessed:
+ response.vary.add("Cookie")
+
+ # If the session is modified to be empty, remove the cookie.
+ # If the session is empty, return without setting the cookie.
+ if not session:
+ if session.modified:
+ response.delete_cookie(
+ name,
+ domain=domain,
+ path=path,
+ secure=secure,
+ samesite=samesite,
+ httponly=httponly,
+ )
+ response.vary.add("Cookie")
+
+ return
+
+ if not self.should_set_cookie(app, session):
+ return
+
+ expires = self.get_expiration_time(app, session)
+ val = self.get_signing_serializer(app).dumps(dict(session)) # type: ignore
+ response.set_cookie(
+ name,
+ val, # type: ignore
+ expires=expires,
+ httponly=httponly,
+ domain=domain,
+ path=path,
+ secure=secure,
+ samesite=samesite,
+ )
+ response.vary.add("Cookie")
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/flask/signals.py b/stripe_config/.venv/lib/python3.12/site-packages/flask/signals.py
new file mode 100644
index 0000000..444fda9
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/flask/signals.py
@@ -0,0 +1,17 @@
+from __future__ import annotations
+
+from blinker import Namespace
+
+# This namespace is only for signals provided by Flask itself.
+_signals = Namespace()
+
+template_rendered = _signals.signal("template-rendered")
+before_render_template = _signals.signal("before-render-template")
+request_started = _signals.signal("request-started")
+request_finished = _signals.signal("request-finished")
+request_tearing_down = _signals.signal("request-tearing-down")
+got_request_exception = _signals.signal("got-request-exception")
+appcontext_tearing_down = _signals.signal("appcontext-tearing-down")
+appcontext_pushed = _signals.signal("appcontext-pushed")
+appcontext_popped = _signals.signal("appcontext-popped")
+message_flashed = _signals.signal("message-flashed")
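+# Usage sketch (illustrative, not part of Flask): receivers connect to these
+# blinker signals, typically scoped to an app, e.g.
+#
+#     @template_rendered.connect_via(app)
+#     def log_template(sender, template, context, **extra):
+#         sender.logger.debug("Rendered %s", template.name)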
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/flask/templating.py b/stripe_config/.venv/lib/python3.12/site-packages/flask/templating.py
new file mode 100644
index 0000000..618a3b3
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/flask/templating.py
@@ -0,0 +1,219 @@
+from __future__ import annotations
+
+import typing as t
+
+from jinja2 import BaseLoader
+from jinja2 import Environment as BaseEnvironment
+from jinja2 import Template
+from jinja2 import TemplateNotFound
+
+from .globals import _cv_app
+from .globals import _cv_request
+from .globals import current_app
+from .globals import request
+from .helpers import stream_with_context
+from .signals import before_render_template
+from .signals import template_rendered
+
+if t.TYPE_CHECKING: # pragma: no cover
+ from .app import Flask
+ from .sansio.app import App
+ from .sansio.scaffold import Scaffold
+
+
+def _default_template_ctx_processor() -> dict[str, t.Any]:
+ """Default template context processor. Injects `request`,
+ `session` and `g`.
+ """
+ appctx = _cv_app.get(None)
+ reqctx = _cv_request.get(None)
+ rv: dict[str, t.Any] = {}
+ if appctx is not None:
+ rv["g"] = appctx.g
+ if reqctx is not None:
+ rv["request"] = reqctx.request
+ rv["session"] = reqctx.session
+ return rv
+
+
+class Environment(BaseEnvironment):
+ """Works like a regular Jinja2 environment but has some additional
+ knowledge of how Flask's blueprint works so that it can prepend the
+ name of the blueprint to referenced templates if necessary.
+ """
+
+ def __init__(self, app: App, **options: t.Any) -> None:
+ if "loader" not in options:
+ options["loader"] = app.create_global_jinja_loader()
+ BaseEnvironment.__init__(self, **options)
+ self.app = app
+
+
+class DispatchingJinjaLoader(BaseLoader):
+ """A loader that looks for templates in the application and all
+ the blueprint folders.
+ """
+
+ def __init__(self, app: App) -> None:
+ self.app = app
+
+ def get_source(
+ self, environment: BaseEnvironment, template: str
+ ) -> tuple[str, str | None, t.Callable[[], bool] | None]:
+ if self.app.config["EXPLAIN_TEMPLATE_LOADING"]:
+ return self._get_source_explained(environment, template)
+ return self._get_source_fast(environment, template)
+
+ def _get_source_explained(
+ self, environment: BaseEnvironment, template: str
+ ) -> tuple[str, str | None, t.Callable[[], bool] | None]:
+ attempts = []
+ rv: tuple[str, str | None, t.Callable[[], bool] | None] | None
+ trv: None | (tuple[str, str | None, t.Callable[[], bool] | None]) = None
+
+ for srcobj, loader in self._iter_loaders(template):
+ try:
+ rv = loader.get_source(environment, template)
+ if trv is None:
+ trv = rv
+ except TemplateNotFound:
+ rv = None
+ attempts.append((loader, srcobj, rv))
+
+ from .debughelpers import explain_template_loading_attempts
+
+ explain_template_loading_attempts(self.app, template, attempts)
+
+ if trv is not None:
+ return trv
+ raise TemplateNotFound(template)
+
+ def _get_source_fast(
+ self, environment: BaseEnvironment, template: str
+ ) -> tuple[str, str | None, t.Callable[[], bool] | None]:
+ for _srcobj, loader in self._iter_loaders(template):
+ try:
+ return loader.get_source(environment, template)
+ except TemplateNotFound:
+ continue
+ raise TemplateNotFound(template)
+
+ def _iter_loaders(self, template: str) -> t.Iterator[tuple[Scaffold, BaseLoader]]:
+ loader = self.app.jinja_loader
+ if loader is not None:
+ yield self.app, loader
+
+ for blueprint in self.app.iter_blueprints():
+ loader = blueprint.jinja_loader
+ if loader is not None:
+ yield blueprint, loader
+
+ def list_templates(self) -> list[str]:
+ result = set()
+ loader = self.app.jinja_loader
+ if loader is not None:
+ result.update(loader.list_templates())
+
+ for blueprint in self.app.iter_blueprints():
+ loader = blueprint.jinja_loader
+ if loader is not None:
+ for template in loader.list_templates():
+ result.add(template)
+
+ return list(result)
+
+
+def _render(app: Flask, template: Template, context: dict[str, t.Any]) -> str:
+ app.update_template_context(context)
+ before_render_template.send(
+ app, _async_wrapper=app.ensure_sync, template=template, context=context
+ )
+ rv = template.render(context)
+ template_rendered.send(
+ app, _async_wrapper=app.ensure_sync, template=template, context=context
+ )
+ return rv
+
+
+def render_template(
+ template_name_or_list: str | Template | list[str | Template],
+ **context: t.Any,
+) -> str:
+ """Render a template by name with the given context.
+
+ :param template_name_or_list: The name of the template to render. If
+ a list is given, the first name to exist will be rendered.
+ :param context: The variables to make available in the template.
+ """
+ app = current_app._get_current_object() # type: ignore[attr-defined]
+ template = app.jinja_env.get_or_select_template(template_name_or_list)
+ return _render(app, template, context)
+
+
+def render_template_string(source: str, **context: t.Any) -> str:
+ """Render a template from the given source string with the given
+ context.
+
+ :param source: The source code of the template to render.
+ :param context: The variables to make available in the template.
+ """
+ app = current_app._get_current_object() # type: ignore[attr-defined]
+ template = app.jinja_env.from_string(source)
+ return _render(app, template, context)
+
+
+def _stream(
+ app: Flask, template: Template, context: dict[str, t.Any]
+) -> t.Iterator[str]:
+ app.update_template_context(context)
+ before_render_template.send(
+ app, _async_wrapper=app.ensure_sync, template=template, context=context
+ )
+
+ def generate() -> t.Iterator[str]:
+ yield from template.generate(context)
+ template_rendered.send(
+ app, _async_wrapper=app.ensure_sync, template=template, context=context
+ )
+
+ rv = generate()
+
+ # If a request context is active, keep it while generating.
+ if request:
+ rv = stream_with_context(rv)
+
+ return rv
+
+
+def stream_template(
+ template_name_or_list: str | Template | list[str | Template],
+ **context: t.Any,
+) -> t.Iterator[str]:
+ """Render a template by name with the given context as a stream.
+ This returns an iterator of strings, which can be used as a
+ streaming response from a view.
+
+ :param template_name_or_list: The name of the template to render. If
+ a list is given, the first name to exist will be rendered.
+ :param context: The variables to make available in the template.
+
+ .. versionadded:: 2.2
+ """
+ app = current_app._get_current_object() # type: ignore[attr-defined]
+ template = app.jinja_env.get_or_select_template(template_name_or_list)
+ return _stream(app, template, context)
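+# Usage sketch (illustrative, not part of Flask; ``generate_rows`` is a
+# hypothetical generator): the iterator returned by ``stream_template`` can be
+# returned directly from a view as a streaming body, e.g.
+#
+#     @app.route("/report")
+#     def report():
+#         return stream_template("report.html", rows=generate_rows())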
+
+
+def stream_template_string(source: str, **context: t.Any) -> t.Iterator[str]:
+ """Render a template from the given source string with the given
+ context as a stream. This returns an iterator of strings, which can
+ be used as a streaming response from a view.
+
+ :param source: The source code of the template to render.
+ :param context: The variables to make available in the template.
+
+ .. versionadded:: 2.2
+ """
+ app = current_app._get_current_object() # type: ignore[attr-defined]
+ template = app.jinja_env.from_string(source)
+ return _stream(app, template, context)
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/flask/testing.py b/stripe_config/.venv/lib/python3.12/site-packages/flask/testing.py
new file mode 100644
index 0000000..a27b7c8
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/flask/testing.py
@@ -0,0 +1,298 @@
+from __future__ import annotations
+
+import importlib.metadata
+import typing as t
+from contextlib import contextmanager
+from contextlib import ExitStack
+from copy import copy
+from types import TracebackType
+from urllib.parse import urlsplit
+
+import werkzeug.test
+from click.testing import CliRunner
+from werkzeug.test import Client
+from werkzeug.wrappers import Request as BaseRequest
+
+from .cli import ScriptInfo
+from .sessions import SessionMixin
+
+if t.TYPE_CHECKING: # pragma: no cover
+ from _typeshed.wsgi import WSGIEnvironment
+ from werkzeug.test import TestResponse
+
+ from .app import Flask
+
+
+class EnvironBuilder(werkzeug.test.EnvironBuilder):
+ """An :class:`~werkzeug.test.EnvironBuilder`, that takes defaults from the
+ application.
+
+ :param app: The Flask application to configure the environment from.
+ :param path: URL path being requested.
+ :param base_url: Base URL where the app is being served, which
+ ``path`` is relative to. If not given, built from
+ :data:`PREFERRED_URL_SCHEME`, ``subdomain``,
+ :data:`SERVER_NAME`, and :data:`APPLICATION_ROOT`.
+ :param subdomain: Subdomain name to append to :data:`SERVER_NAME`.
+ :param url_scheme: Scheme to use instead of
+ :data:`PREFERRED_URL_SCHEME`.
+ :param json: If given, this is serialized as JSON and passed as
+ ``data``. Also defaults ``content_type`` to
+ ``application/json``.
+ :param args: other positional arguments passed to
+ :class:`~werkzeug.test.EnvironBuilder`.
+ :param kwargs: other keyword arguments passed to
+ :class:`~werkzeug.test.EnvironBuilder`.
+ """
+
+ def __init__(
+ self,
+ app: Flask,
+ path: str = "/",
+ base_url: str | None = None,
+ subdomain: str | None = None,
+ url_scheme: str | None = None,
+ *args: t.Any,
+ **kwargs: t.Any,
+ ) -> None:
+ assert not (base_url or subdomain or url_scheme) or (
+ base_url is not None
+ ) != bool(
+ subdomain or url_scheme
+ ), 'Cannot pass "subdomain" or "url_scheme" with "base_url".'
+
+ if base_url is None:
+ http_host = app.config.get("SERVER_NAME") or "localhost"
+ app_root = app.config["APPLICATION_ROOT"]
+
+ if subdomain:
+ http_host = f"{subdomain}.{http_host}"
+
+ if url_scheme is None:
+ url_scheme = app.config["PREFERRED_URL_SCHEME"]
+
+ url = urlsplit(path)
+ base_url = (
+ f"{url.scheme or url_scheme}://{url.netloc or http_host}"
+ f"/{app_root.lstrip('/')}"
+ )
+ path = url.path
+
+ if url.query:
+ sep = b"?" if isinstance(url.query, bytes) else "?"
+ path += sep + url.query
+
+ self.app = app
+ super().__init__(path, base_url, *args, **kwargs)
+
+ def json_dumps(self, obj: t.Any, **kwargs: t.Any) -> str: # type: ignore
+ """Serialize ``obj`` to a JSON-formatted string.
+
+ The serialization will be configured according to the config associated
+ with this EnvironBuilder's ``app``.
+ """
+ return self.app.json.dumps(obj, **kwargs)
+
+
+_werkzeug_version = ""
+
+
+def _get_werkzeug_version() -> str:
+ global _werkzeug_version
+
+ if not _werkzeug_version:
+ _werkzeug_version = importlib.metadata.version("werkzeug")
+
+ return _werkzeug_version
+
+
+class FlaskClient(Client):
+ """Works like a regular Werkzeug test client but has knowledge about
+ Flask's contexts to defer the cleanup of the request context until
+ the end of a ``with`` block. For general information about how to
+ use this class refer to :class:`werkzeug.test.Client`.
+
+ .. versionchanged:: 0.12
+ `app.test_client()` includes preset default environment, which can be
+ set after instantiation of the `app.test_client()` object in
+ `client.environ_base`.
+
+ Basic usage is outlined in the :doc:`/testing` chapter.
+ """
+
+ application: Flask
+
+ def __init__(self, *args: t.Any, **kwargs: t.Any) -> None:
+ super().__init__(*args, **kwargs)
+ self.preserve_context = False
+ self._new_contexts: list[t.ContextManager[t.Any]] = []
+ self._context_stack = ExitStack()
+ self.environ_base = {
+ "REMOTE_ADDR": "127.0.0.1",
+ "HTTP_USER_AGENT": f"Werkzeug/{_get_werkzeug_version()}",
+ }
+
+ @contextmanager
+ def session_transaction(
+ self, *args: t.Any, **kwargs: t.Any
+ ) -> t.Iterator[SessionMixin]:
+ """When used in combination with a ``with`` statement this opens a
+ session transaction. This can be used to modify the session that
+ the test client uses. Once the ``with`` block is left the session is
+ stored back.
+
+ ::
+
+ with client.session_transaction() as session:
+ session['value'] = 42
+
+ Internally this is implemented by going through a temporary test
+ request context and since session handling could depend on
+ request variables this function accepts the same arguments as
+ :meth:`~flask.Flask.test_request_context` which are directly
+ passed through.
+ """
+ if self._cookies is None:
+ raise TypeError(
+ "Cookies are disabled. Create a client with 'use_cookies=True'."
+ )
+
+ app = self.application
+ ctx = app.test_request_context(*args, **kwargs)
+ self._add_cookies_to_wsgi(ctx.request.environ)
+
+ with ctx:
+ sess = app.session_interface.open_session(app, ctx.request)
+
+ if sess is None:
+ raise RuntimeError("Session backend did not open a session.")
+
+ yield sess
+ resp = app.response_class()
+
+ if app.session_interface.is_null_session(sess):
+ return
+
+ with ctx:
+ app.session_interface.save_session(app, sess, resp)
+
+ self._update_cookies_from_response(
+ ctx.request.host.partition(":")[0],
+ ctx.request.path,
+ resp.headers.getlist("Set-Cookie"),
+ )
+
+ def _copy_environ(self, other: WSGIEnvironment) -> WSGIEnvironment:
+ out = {**self.environ_base, **other}
+
+ if self.preserve_context:
+ out["werkzeug.debug.preserve_context"] = self._new_contexts.append
+
+ return out
+
+ def _request_from_builder_args(
+ self, args: tuple[t.Any, ...], kwargs: dict[str, t.Any]
+ ) -> BaseRequest:
+ kwargs["environ_base"] = self._copy_environ(kwargs.get("environ_base", {}))
+ builder = EnvironBuilder(self.application, *args, **kwargs)
+
+ try:
+ return builder.get_request()
+ finally:
+ builder.close()
+
+ def open(
+ self,
+ *args: t.Any,
+ buffered: bool = False,
+ follow_redirects: bool = False,
+ **kwargs: t.Any,
+ ) -> TestResponse:
+ if args and isinstance(
+ args[0], (werkzeug.test.EnvironBuilder, dict, BaseRequest)
+ ):
+ if isinstance(args[0], werkzeug.test.EnvironBuilder):
+ builder = copy(args[0])
+ builder.environ_base = self._copy_environ(builder.environ_base or {}) # type: ignore[arg-type]
+ request = builder.get_request()
+ elif isinstance(args[0], dict):
+ request = EnvironBuilder.from_environ(
+ args[0], app=self.application, environ_base=self._copy_environ({})
+ ).get_request()
+ else:
+ # isinstance(args[0], BaseRequest)
+ request = copy(args[0])
+ request.environ = self._copy_environ(request.environ)
+ else:
+ # request is None
+ request = self._request_from_builder_args(args, kwargs)
+
+ # Pop any previously preserved contexts. This prevents contexts
+ # from being preserved across redirects or multiple requests
+ # within a single block.
+ self._context_stack.close()
+
+ response = super().open(
+ request,
+ buffered=buffered,
+ follow_redirects=follow_redirects,
+ )
+ response.json_module = self.application.json # type: ignore[assignment]
+
+ # Re-push contexts that were preserved during the request.
+ while self._new_contexts:
+ cm = self._new_contexts.pop()
+ self._context_stack.enter_context(cm)
+
+ return response
+
+ def __enter__(self) -> FlaskClient:
+ if self.preserve_context:
+ raise RuntimeError("Cannot nest client invocations")
+ self.preserve_context = True
+ return self
+
+ def __exit__(
+ self,
+ exc_type: type | None,
+ exc_value: BaseException | None,
+ tb: TracebackType | None,
+ ) -> None:
+ self.preserve_context = False
+ self._context_stack.close()
+
+
+class FlaskCliRunner(CliRunner):
+ """A :class:`~click.testing.CliRunner` for testing a Flask app's
+ CLI commands. Typically created using
+ :meth:`~flask.Flask.test_cli_runner`. See :ref:`testing-cli`.
+ """
+
+ def __init__(self, app: Flask, **kwargs: t.Any) -> None:
+ self.app = app
+ super().__init__(**kwargs)
+
+ def invoke( # type: ignore
+ self, cli: t.Any = None, args: t.Any = None, **kwargs: t.Any
+ ) -> t.Any:
+ """Invokes a CLI command in an isolated environment. See
+ :meth:`CliRunner.invoke <click.testing.CliRunner.invoke>` for
+ full method documentation. See :ref:`testing-cli` for examples.
+
+ If the ``obj`` argument is not given, passes an instance of
+ :class:`~flask.cli.ScriptInfo` that knows how to load the Flask
+ app being tested.
+
+ :param cli: Command object to invoke. Default is the app's
+ :attr:`~flask.app.Flask.cli` group.
+ :param args: List of strings to invoke the command with.
+
+ :return: a :class:`~click.testing.Result` object.
+ """
+ if cli is None:
+ cli = self.app.cli
+
+ if "obj" not in kwargs:
+ kwargs["obj"] = ScriptInfo(create_app=lambda: self.app)
+
+ return super().invoke(cli, args, **kwargs)
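+ # Usage sketch (illustrative, not part of Flask; "hello" is a hypothetical
+ # command registered on ``app.cli``):
+ #
+ #     runner = app.test_cli_runner()
+ #     result = runner.invoke(args=["hello", "--name", "World"])
+ #     assert "Hello" in result.output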
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/flask/typing.py b/stripe_config/.venv/lib/python3.12/site-packages/flask/typing.py
new file mode 100644
index 0000000..cf6d4ae
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/flask/typing.py
@@ -0,0 +1,90 @@
+from __future__ import annotations
+
+import typing as t
+
+if t.TYPE_CHECKING: # pragma: no cover
+ from _typeshed.wsgi import WSGIApplication # noqa: F401
+ from werkzeug.datastructures import Headers # noqa: F401
+ from werkzeug.sansio.response import Response # noqa: F401
+
+# The possible types that are directly convertible or are a Response object.
+ResponseValue = t.Union[
+ "Response",
+ str,
+ bytes,
+ t.List[t.Any],
+ # Only dict is actually accepted, but Mapping allows for TypedDict.
+ t.Mapping[str, t.Any],
+ t.Iterator[str],
+ t.Iterator[bytes],
+]
+
+# the possible types for an individual HTTP header
+# This should be a Union, but mypy doesn't pass unless it's a TypeVar.
+HeaderValue = t.Union[str, t.List[str], t.Tuple[str, ...]]
+
+# the possible types for HTTP headers
+HeadersValue = t.Union[
+ "Headers",
+ t.Mapping[str, HeaderValue],
+ t.Sequence[t.Tuple[str, HeaderValue]],
+]
+
+# The possible types returned by a route function.
+ResponseReturnValue = t.Union[
+ ResponseValue,
+ t.Tuple[ResponseValue, HeadersValue],
+ t.Tuple[ResponseValue, int],
+ t.Tuple[ResponseValue, int, HeadersValue],
+ "WSGIApplication",
+]
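+# Illustrative examples (not part of Flask) of view return values that match
+# ``ResponseReturnValue``:
+#
+#     return "text"                          # ResponseValue
+#     return {"ok": True}, 201               # (ResponseValue, int)
+#     return "gone", 410, {"X-Hint": "old"}  # (ResponseValue, int, HeadersValue)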
+
+# Allow any subclass of werkzeug.Response, such as the one from Flask,
+# as a callback argument. Using werkzeug.Response directly makes a
+# callback annotated with flask.Response fail type checking.
+ResponseClass = t.TypeVar("ResponseClass", bound="Response")
+
+AppOrBlueprintKey = t.Optional[str] # The App key is None, whereas blueprints are named
+AfterRequestCallable = t.Union[
+ t.Callable[[ResponseClass], ResponseClass],
+ t.Callable[[ResponseClass], t.Awaitable[ResponseClass]],
+]
+BeforeFirstRequestCallable = t.Union[
+ t.Callable[[], None], t.Callable[[], t.Awaitable[None]]
+]
+BeforeRequestCallable = t.Union[
+ t.Callable[[], t.Optional[ResponseReturnValue]],
+ t.Callable[[], t.Awaitable[t.Optional[ResponseReturnValue]]],
+]
+ShellContextProcessorCallable = t.Callable[[], t.Dict[str, t.Any]]
+TeardownCallable = t.Union[
+ t.Callable[[t.Optional[BaseException]], None],
+ t.Callable[[t.Optional[BaseException]], t.Awaitable[None]],
+]
+TemplateContextProcessorCallable = t.Union[
+ t.Callable[[], t.Dict[str, t.Any]],
+ t.Callable[[], t.Awaitable[t.Dict[str, t.Any]]],
+]
+TemplateFilterCallable = t.Callable[..., t.Any]
+TemplateGlobalCallable = t.Callable[..., t.Any]
+TemplateTestCallable = t.Callable[..., bool]
+URLDefaultCallable = t.Callable[[str, t.Dict[str, t.Any]], None]
+URLValuePreprocessorCallable = t.Callable[
+ [t.Optional[str], t.Optional[t.Dict[str, t.Any]]], None
+]
+
+# This should take Exception, but that either breaks typing the argument
+# with a specific exception, or decorating multiple times with different
+# exceptions (and using a union type on the argument).
+# https://github.com/pallets/flask/issues/4095
+# https://github.com/pallets/flask/issues/4295
+# https://github.com/pallets/flask/issues/4297
+ErrorHandlerCallable = t.Union[
+ t.Callable[[t.Any], ResponseReturnValue],
+ t.Callable[[t.Any], t.Awaitable[ResponseReturnValue]],
+]
+
+RouteCallable = t.Union[
+ t.Callable[..., ResponseReturnValue],
+ t.Callable[..., t.Awaitable[ResponseReturnValue]],
+]
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/flask/views.py b/stripe_config/.venv/lib/python3.12/site-packages/flask/views.py
new file mode 100644
index 0000000..794fdc0
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/flask/views.py
@@ -0,0 +1,191 @@
+from __future__ import annotations
+
+import typing as t
+
+from . import typing as ft
+from .globals import current_app
+from .globals import request
+
+F = t.TypeVar("F", bound=t.Callable[..., t.Any])
+
+http_method_funcs = frozenset(
+ ["get", "post", "head", "options", "delete", "put", "trace", "patch"]
+)
+
+
+class View:
+ """Subclass this class and override :meth:`dispatch_request` to
+ create a generic class-based view. Call :meth:`as_view` to create a
+ view function that creates an instance of the class with the given
+ arguments and calls its ``dispatch_request`` method with any URL
+ variables.
+
+ See :doc:`views` for a detailed guide.
+
+ .. code-block:: python
+
+ class Hello(View):
+ init_every_request = False
+
+ def dispatch_request(self, name):
+ return f"Hello, {name}!"
+
+ app.add_url_rule(
+ "/hello/", view_func=Hello.as_view("hello")
+ )
+
+ Set :attr:`methods` on the class to change what methods the view
+ accepts.
+
+ Set :attr:`decorators` on the class to apply a list of decorators to
+ the generated view function. Decorators applied to the class itself
+ will not be applied to the generated view function!
+
+ Set :attr:`init_every_request` to ``False`` for efficiency, unless
+ you need to store request-global data on ``self``.
+ """
+
+ #: The methods this view is registered for. Uses the same default
+ #: (``["GET", "HEAD", "OPTIONS"]``) as ``route`` and
+ #: ``add_url_rule`` by default.
+ methods: t.ClassVar[t.Collection[str] | None] = None
+
+ #: Control whether the ``OPTIONS`` method is handled automatically.
+ #: Uses the same default (``True``) as ``route`` and
+ #: ``add_url_rule`` by default.
+ provide_automatic_options: t.ClassVar[bool | None] = None
+
+ #: A list of decorators to apply, in order, to the generated view
+ #: function. Remember that ``@decorator`` syntax is applied bottom
+ #: to top, so the first decorator in the list would be the bottom
+ #: decorator.
+ #:
+ #: .. versionadded:: 0.8
+ decorators: t.ClassVar[list[t.Callable[[F], F]]] = []
+
+ #: Create a new instance of this view class for every request by
+ #: default. If a view subclass sets this to ``False``, the same
+ #: instance is used for every request.
+ #:
+ #: A single instance is more efficient, especially if complex setup
+ #: is done during init. However, storing data on ``self`` is no
+ #: longer safe across requests, and :data:`~flask.g` should be used
+ #: instead.
+ #:
+ #: .. versionadded:: 2.2
+ init_every_request: t.ClassVar[bool] = True
+
+ def dispatch_request(self) -> ft.ResponseReturnValue:
+ """The actual view function behavior. Subclasses must override
+ this and return a valid response. Any variables from the URL
+ rule are passed as keyword arguments.
+ """
+ raise NotImplementedError()
+
+ @classmethod
+ def as_view(
+ cls, name: str, *class_args: t.Any, **class_kwargs: t.Any
+ ) -> ft.RouteCallable:
+ """Convert the class into a view function that can be registered
+ for a route.
+
+ By default, the generated view will create a new instance of the
+ view class for every request and call its
+ :meth:`dispatch_request` method. If the view class sets
+ :attr:`init_every_request` to ``False``, the same instance will
+ be used for every request.
+
+ Except for ``name``, all other arguments passed to this method
+ are forwarded to the view class ``__init__`` method.
+
+ .. versionchanged:: 2.2
+ Added the ``init_every_request`` class attribute.
+ """
+ if cls.init_every_request:
+
+ def view(**kwargs: t.Any) -> ft.ResponseReturnValue:
+ self = view.view_class( # type: ignore[attr-defined]
+ *class_args, **class_kwargs
+ )
+ return current_app.ensure_sync(self.dispatch_request)(**kwargs) # type: ignore[no-any-return]
+
+ else:
+ self = cls(*class_args, **class_kwargs)
+
+ def view(**kwargs: t.Any) -> ft.ResponseReturnValue:
+ return current_app.ensure_sync(self.dispatch_request)(**kwargs) # type: ignore[no-any-return]
+
+ if cls.decorators:
+ view.__name__ = name
+ view.__module__ = cls.__module__
+ for decorator in cls.decorators:
+ view = decorator(view)
+
+ # We attach the view class to the view function for two reasons:
+ # first of all it allows us to easily figure out what class-based
+ # view this thing came from, secondly it's also used for instantiating
+ # the view class so you can actually replace it with something else
+ # for testing purposes and debugging.
+ view.view_class = cls # type: ignore
+ view.__name__ = name
+ view.__doc__ = cls.__doc__
+ view.__module__ = cls.__module__
+ view.methods = cls.methods # type: ignore
+ view.provide_automatic_options = cls.provide_automatic_options # type: ignore
+ return view
+
+
+class MethodView(View):
+ """Dispatches request methods to the corresponding instance methods.
+ For example, if you implement a ``get`` method, it will be used to
+ handle ``GET`` requests.
+
+ This can be useful for defining a REST API.
+
+ :attr:`methods` is automatically set based on the methods defined on
+ the class.
+
+ See :doc:`views` for a detailed guide.
+
+ .. code-block:: python
+
+ class CounterAPI(MethodView):
+ def get(self):
+ return str(session.get("counter", 0))
+
+ def post(self):
+ session["counter"] = session.get("counter", 0) + 1
+ return redirect(url_for("counter"))
+
+ app.add_url_rule(
+ "/counter", view_func=CounterAPI.as_view("counter")
+ )
+ """
+
+ def __init_subclass__(cls, **kwargs: t.Any) -> None:
+ super().__init_subclass__(**kwargs)
+
+ if "methods" not in cls.__dict__:
+ methods = set()
+
+ for base in cls.__bases__:
+ if getattr(base, "methods", None):
+ methods.update(base.methods) # type: ignore[attr-defined]
+
+ for key in http_method_funcs:
+ if hasattr(cls, key):
+ methods.add(key.upper())
+
+ if methods:
+ cls.methods = methods
+
+ def dispatch_request(self, **kwargs: t.Any) -> ft.ResponseReturnValue:
+ meth = getattr(self, request.method.lower(), None)
+
+ # If the request method is HEAD and we don't have a handler for it
+ # retry with GET.
+ if meth is None and request.method == "HEAD":
+ meth = getattr(self, "get", None)
+
+ assert meth is not None, f"Unimplemented method {request.method!r}"
+ return current_app.ensure_sync(meth)(**kwargs) # type: ignore[no-any-return]
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/flask/wrappers.py b/stripe_config/.venv/lib/python3.12/site-packages/flask/wrappers.py
new file mode 100644
index 0000000..c1eca80
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/flask/wrappers.py
@@ -0,0 +1,174 @@
+from __future__ import annotations
+
+import typing as t
+
+from werkzeug.exceptions import BadRequest
+from werkzeug.exceptions import HTTPException
+from werkzeug.wrappers import Request as RequestBase
+from werkzeug.wrappers import Response as ResponseBase
+
+from . import json
+from .globals import current_app
+from .helpers import _split_blueprint_path
+
+if t.TYPE_CHECKING: # pragma: no cover
+ from werkzeug.routing import Rule
+
+
+class Request(RequestBase):
+ """The request object used by default in Flask. Remembers the
+ matched endpoint and view arguments.
+
+ It is what ends up as :class:`~flask.request`. If you want to replace
+ the request object used you can subclass this and set
+ :attr:`~flask.Flask.request_class` to your subclass.
+
+ The request object is a :class:`~werkzeug.wrappers.Request` subclass and
+ provides all of the attributes Werkzeug defines plus a few Flask
+ specific ones.
+ """
+
+ json_module: t.Any = json
+
+ #: The internal URL rule that matched the request. This can be
+ #: useful to inspect which methods are allowed for the URL from
+ #: a before/after handler (``request.url_rule.methods``) etc.
+ #: Though if the request's method was invalid for the URL rule,
+ #: the valid list is available in ``routing_exception.valid_methods``
+ #: instead (an attribute of the Werkzeug exception
+ #: :exc:`~werkzeug.exceptions.MethodNotAllowed`)
+ #: because the request was never internally bound.
+ #:
+ #: .. versionadded:: 0.6
+ url_rule: Rule | None = None
+
+ #: A dict of view arguments that matched the request. If an exception
+ #: happened when matching, this will be ``None``.
+ view_args: dict[str, t.Any] | None = None
+
+ #: If matching the URL failed, this is the exception that will be
+ #: raised / was raised as part of the request handling. This is
+ #: usually a :exc:`~werkzeug.exceptions.NotFound` exception or
+ #: something similar.
+ routing_exception: HTTPException | None = None
+
+ @property
+ def max_content_length(self) -> int | None: # type: ignore[override]
+ """Read-only view of the ``MAX_CONTENT_LENGTH`` config key."""
+ if current_app:
+ return current_app.config["MAX_CONTENT_LENGTH"] # type: ignore[no-any-return]
+ else:
+ return None
+
+ @property
+ def endpoint(self) -> str | None:
+ """The endpoint that matched the request URL.
+
+ This will be ``None`` if matching failed or has not been
+ performed yet.
+
+ This in combination with :attr:`view_args` can be used to
+ reconstruct the same URL or a modified URL.
+ """
+ if self.url_rule is not None:
+ return self.url_rule.endpoint
+
+ return None
+
+ @property
+ def blueprint(self) -> str | None:
+ """The registered name of the current blueprint.
+
+ This will be ``None`` if the endpoint is not part of a
+ blueprint, or if URL matching failed or has not been performed
+ yet.
+
+ This does not necessarily match the name the blueprint was
+ created with. It may have been nested, or registered with a
+ different name.
+ """
+ endpoint = self.endpoint
+
+ if endpoint is not None and "." in endpoint:
+ return endpoint.rpartition(".")[0]
+
+ return None
+
+ @property
+ def blueprints(self) -> list[str]:
+ """The registered names of the current blueprint upwards through
+ parent blueprints.
+
+ This will be an empty list if there is no current blueprint, or
+ if URL matching failed.
+
+ .. versionadded:: 2.0.1
+ """
+ name = self.blueprint
+
+ if name is None:
+ return []
+
+ return _split_blueprint_path(name)
+
+ def _load_form_data(self) -> None:
+ super()._load_form_data()
+
+ # In debug mode we're replacing the files multidict with an ad-hoc
+ # subclass that raises a different error for key errors.
+ if (
+ current_app
+ and current_app.debug
+ and self.mimetype != "multipart/form-data"
+ and not self.files
+ ):
+ from .debughelpers import attach_enctype_error_multidict
+
+ attach_enctype_error_multidict(self)
+
+ def on_json_loading_failed(self, e: ValueError | None) -> t.Any:
+ try:
+ return super().on_json_loading_failed(e)
+ except BadRequest as e:
+ if current_app and current_app.debug:
+ raise
+
+ raise BadRequest() from e
+
+
+class Response(ResponseBase):
+ """The response object that is used by default in Flask. Works like the
+ response object from Werkzeug but is set to have an HTML mimetype by
+ default. Quite often you don't have to create this object yourself because
+ :meth:`~flask.Flask.make_response` will take care of that for you.
+
+ If you want to replace the response object used you can subclass this and
+ set :attr:`~flask.Flask.response_class` to your subclass.
+
+ .. versionchanged:: 1.0
+ JSON support is added to the response, like the request. This is useful
+ when testing to get the test client response data as JSON.
+
+ .. versionchanged:: 1.0
+
+ Added :attr:`max_cookie_size`.
+ """
+
+ default_mimetype: str | None = "text/html"
+
+ json_module = json
+
+ autocorrect_location_header = False
+
+ @property
+ def max_cookie_size(self) -> int: # type: ignore
+ """Read-only view of the :data:`MAX_COOKIE_SIZE` config key.
+
+ See :attr:`~werkzeug.wrappers.Response.max_cookie_size` in
+ Werkzeug's docs.
+ """
+ if current_app:
+ return current_app.config["MAX_COOKIE_SIZE"] # type: ignore[no-any-return]
+
+ # return Werkzeug's default when not in an app context
+ return super().max_cookie_size
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/idna-3.10.dist-info/INSTALLER b/stripe_config/.venv/lib/python3.12/site-packages/idna-3.10.dist-info/INSTALLER
new file mode 100644
index 0000000..a1b589e
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/idna-3.10.dist-info/INSTALLER
@@ -0,0 +1 @@
+pip
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/idna-3.10.dist-info/LICENSE.md b/stripe_config/.venv/lib/python3.12/site-packages/idna-3.10.dist-info/LICENSE.md
new file mode 100644
index 0000000..19b6b45
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/idna-3.10.dist-info/LICENSE.md
@@ -0,0 +1,31 @@
+BSD 3-Clause License
+
+Copyright (c) 2013-2024, Kim Davies and contributors.
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are
+met:
+
+1. Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+
+2. Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in the
+ documentation and/or other materials provided with the distribution.
+
+3. Neither the name of the copyright holder nor the names of its
+ contributors may be used to endorse or promote products derived from
+ this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED
+TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
+LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
+NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/idna-3.10.dist-info/METADATA b/stripe_config/.venv/lib/python3.12/site-packages/idna-3.10.dist-info/METADATA
new file mode 100644
index 0000000..c42623e
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/idna-3.10.dist-info/METADATA
@@ -0,0 +1,250 @@
+Metadata-Version: 2.1
+Name: idna
+Version: 3.10
+Summary: Internationalized Domain Names in Applications (IDNA)
+Author-email: Kim Davies
+Requires-Python: >=3.6
+Description-Content-Type: text/x-rst
+Classifier: Development Status :: 5 - Production/Stable
+Classifier: Intended Audience :: Developers
+Classifier: Intended Audience :: System Administrators
+Classifier: License :: OSI Approved :: BSD License
+Classifier: Operating System :: OS Independent
+Classifier: Programming Language :: Python
+Classifier: Programming Language :: Python :: 3
+Classifier: Programming Language :: Python :: 3 :: Only
+Classifier: Programming Language :: Python :: 3.6
+Classifier: Programming Language :: Python :: 3.7
+Classifier: Programming Language :: Python :: 3.8
+Classifier: Programming Language :: Python :: 3.9
+Classifier: Programming Language :: Python :: 3.10
+Classifier: Programming Language :: Python :: 3.11
+Classifier: Programming Language :: Python :: 3.12
+Classifier: Programming Language :: Python :: 3.13
+Classifier: Programming Language :: Python :: Implementation :: CPython
+Classifier: Programming Language :: Python :: Implementation :: PyPy
+Classifier: Topic :: Internet :: Name Service (DNS)
+Classifier: Topic :: Software Development :: Libraries :: Python Modules
+Classifier: Topic :: Utilities
+Requires-Dist: ruff >= 0.6.2 ; extra == "all"
+Requires-Dist: mypy >= 1.11.2 ; extra == "all"
+Requires-Dist: pytest >= 8.3.2 ; extra == "all"
+Requires-Dist: flake8 >= 7.1.1 ; extra == "all"
+Project-URL: Changelog, https://github.com/kjd/idna/blob/master/HISTORY.rst
+Project-URL: Issue tracker, https://github.com/kjd/idna/issues
+Project-URL: Source, https://github.com/kjd/idna
+Provides-Extra: all
+
+Internationalized Domain Names in Applications (IDNA)
+=====================================================
+
+Support for the Internationalized Domain Names in
+Applications (IDNA) protocol as specified in `RFC 5891
+<https://tools.ietf.org/html/rfc5891>`_. This is the latest version of
+the protocol and is sometimes referred to as “IDNA 2008”.
+
+This library also provides support for Unicode Technical
+Standard 46, `Unicode IDNA Compatibility Processing
+<https://unicode.org/reports/tr46/>`_.
+
+This acts as a suitable replacement for the “encodings.idna”
+module that comes with the Python standard library, but which
+only supports the older superseded IDNA specification (`RFC 3490
+<https://tools.ietf.org/html/rfc3490>`_).
+
+Basic functions are simply executed:
+
+.. code-block:: pycon
+
+ >>> import idna
+ >>> idna.encode('ドメイン.テスト')
+ b'xn--eckwd4c7c.xn--zckzah'
+ >>> print(idna.decode('xn--eckwd4c7c.xn--zckzah'))
+ ドメイン.テスト
+
+
+Installation
+------------
+
+This package is available for installation from PyPI:
+
+.. code-block:: bash
+
+ $ python3 -m pip install idna
+
+
+Usage
+-----
+
+For typical usage, the ``encode`` and ``decode`` functions will take a
+domain name argument and perform a conversion to A-labels or U-labels
+respectively.
+
+.. code-block:: pycon
+
+ >>> import idna
+ >>> idna.encode('ドメイン.テスト')
+ b'xn--eckwd4c7c.xn--zckzah'
+ >>> print(idna.decode('xn--eckwd4c7c.xn--zckzah'))
+ ドメイン.テスト
+
+You may use the codec encoding and decoding methods using the
+``idna.codec`` module:
+
+.. code-block:: pycon
+
+ >>> import idna.codec
+ >>> print('домен.испытание'.encode('idna2008'))
+ b'xn--d1acufc.xn--80akhbyknj4f'
+ >>> print(b'xn--d1acufc.xn--80akhbyknj4f'.decode('idna2008'))
+ домен.испытание
+
+Conversions can be applied at a per-label basis using the ``ulabel`` or
+``alabel`` functions if necessary:
+
+.. code-block:: pycon
+
+ >>> idna.alabel('测试')
+ b'xn--0zwm56d'
+
+Compatibility Mapping (UTS #46)
++++++++++++++++++++++++++++++++
+
+As described in `RFC 5895 <https://tools.ietf.org/html/rfc5895>`_, the
+IDNA specification does not normalize input from different potential
+ways a user may input a domain name. This functionality, known as
+a “mapping”, is considered by the specification to be a local
+user-interface issue distinct from IDNA conversion functionality.
+
+This library provides one such mapping that was developed by the
+Unicode Consortium. Known as `Unicode IDNA Compatibility Processing
+<https://unicode.org/reports/tr46/>`_, it provides for both a regular
+mapping for typical applications, as well as a transitional mapping to
+help migrate from older IDNA 2003 applications. Strings are
+preprocessed according to Section 4.4 “Preprocessing for IDNA2008”
+prior to the IDNA operations.
+
+For example, “Königsgäßchen” is not a permissible label as *LATIN
+CAPITAL LETTER K* is not allowed (nor are capital letters in general).
+UTS 46 will convert this into lower case prior to applying the IDNA
+conversion.
+
+.. code-block:: pycon
+
+ >>> import idna
+ >>> idna.encode('Königsgäßchen')
+ ...
+ idna.core.InvalidCodepoint: Codepoint U+004B at position 1 of 'Königsgäßchen' not allowed
+ >>> idna.encode('Königsgäßchen', uts46=True)
+ b'xn--knigsgchen-b4a3dun'
+ >>> print(idna.decode('xn--knigsgchen-b4a3dun'))
+ königsgäßchen
+
+Transitional processing provides conversions to help transition from
+the older 2003 standard to the current standard. For example, in the
+original IDNA specification, the *LATIN SMALL LETTER SHARP S* (ß) was
+converted into two *LATIN SMALL LETTER S* (ss), whereas in the current
+IDNA specification this conversion is not performed.
+
+.. code-block:: pycon
+
+    >>> idna.encode('Königsgäßchen', uts46=True, transitional=True)
+    b'xn--knigsgsschen-lcb0w'
+
+Implementers should use transitional processing with caution, only in
+rare cases where conversion from legacy labels to current labels must be
+performed (i.e. IDNA implementations that pre-date 2008). For typical
+applications that just need to convert labels, transitional processing
+is unlikely to be beneficial and could produce unexpected incompatible
+results.
+
+``encodings.idna`` Compatibility
+++++++++++++++++++++++++++++++++
+
+Function calls from the Python built-in ``encodings.idna`` module are
+mapped to their IDNA 2008 equivalents using the ``idna.compat`` module.
+Simply substitute the ``import`` clause in your code to refer to the new
+module name.
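+
+A minimal sketch of such a substitution (``ToASCII`` and ``ToUnicode`` are the
+IDNA 2008 equivalents provided by ``idna.compat``; ``nameprep`` intentionally
+raises ``NotImplementedError``):
+
+.. code-block:: pycon
+
+    >>> import idna.compat
+    >>> idna.compat.ToASCII('ドメイン.テスト')
+    b'xn--eckwd4c7c.xn--zckzah'
+    >>> idna.compat.ToUnicode(b'xn--eckwd4c7c.xn--zckzah')
+    'ドメイン.テスト'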
+
+Exceptions
+----------
+
+All errors raised during the conversion following the specification
+should raise an exception derived from the ``idna.IDNAError`` base
+class.
+
+More specific exceptions may be generated, such as ``idna.IDNABidiError``
+when the error reflects an illegal combination of left-to-right and
+right-to-left characters in a label; ``idna.InvalidCodepoint`` when
+a specific codepoint is an illegal character in an IDN label (i.e.
+INVALID); and ``idna.InvalidCodepointContext`` when the codepoint is
+illegal based on its positional context (i.e. it is CONTEXTO or CONTEXTJ
+but the contextual requirements are not satisfied).
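+
+For example, a caller that only needs a valid/invalid answer can catch the
+base class; this minimal sketch reuses the label rejected in the UTS 46
+example above:
+
+.. code-block:: pycon
+
+    >>> import idna
+    >>> try:
+    ...     idna.encode('Königsgäßchen')
+    ... except idna.IDNAError as exc:
+    ...     print(type(exc).__name__)
+    InvalidCodepoint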
+
+Building and Diagnostics
+------------------------
+
+The IDNA and UTS 46 functionality relies upon pre-calculated lookup
+tables for performance. These tables are derived from computing against
+eligibility criteria in the respective standards. These tables are
+computed using the command-line script ``tools/idna-data``.
+
+This tool will fetch relevant codepoint data from the Unicode repository
+and perform the required calculations to identify eligibility. There are
+three main modes:
+
+* ``idna-data make-libdata``. Generates ``idnadata.py`` and
+ ``uts46data.py``, the pre-calculated lookup tables used for IDNA and
+ UTS 46 conversions. Implementers who wish to track this library against
+ a different Unicode version may use this tool to manually generate a
+ different version of the ``idnadata.py`` and ``uts46data.py`` files.
+
+* ``idna-data make-table``. Generate a table of the IDNA disposition
+ (e.g. PVALID, CONTEXTJ, CONTEXTO) in the format found in Appendix
+ B.1 of RFC 5892 and the pre-computed tables published by `IANA
+  <https://www.iana.org/assignments/idna-tables>`_.
+
+* ``idna-data U+0061``. Prints debugging output on the various
+ properties associated with an individual Unicode codepoint (in this
+ case, U+0061), that are used to assess the IDNA and UTS 46 status of a
+ codepoint. This is helpful in debugging or analysis.
+
+The tool accepts a number of arguments, described using ``idna-data
+-h``. Most notably, the ``--version`` argument allows the specification
+of the version of Unicode to be used in computing the table data. For
+example, ``idna-data --version 9.0.0 make-libdata`` will generate
+library data against Unicode 9.0.0.
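+
+For instance, regenerating the bundled tables against a specific Unicode
+release might look like this (a sketch; the script lives under ``tools/`` in
+the source checkout, and the exact invocation may differ on your system):
+
+.. code-block:: bash
+
+    $ tools/idna-data --version 9.0.0 make-libdata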
+
+
+Additional Notes
+----------------
+
+* **Packages**. The latest tagged release version is published in the
+  `Python Package Index <https://pypi.org/project/idna/>`_.
+
+* **Version support**. This library supports Python 3.6 and higher.
+ As this library serves as a low-level toolkit for a variety of
+ applications, many of which strive for broad compatibility with older
+ Python versions, there is no rush to remove older interpreter support.
+ Removing support for older versions should be well justified in that the
+ maintenance burden has become too high.
+
+* **Python 2**. Python 2 is supported by version 2.x of this library.
+ Use "idna<3" in your requirements file if you need this library for
+ a Python 2 application. Be advised that these versions are no longer
+ actively developed.
+
+* **Testing**. The library has a test suite based on each rule of the
+ IDNA specification, as well as tests that are provided as part of the
+ Unicode Technical Standard 46, `Unicode IDNA Compatibility Processing
+  <https://unicode.org/reports/tr46/>`_.
+
+* **Emoji**. It is an occasional request to support emoji domains in
+ this library. Encoding of symbols like emoji is expressly prohibited by
+ the technical standard IDNA 2008 and emoji domains are broadly phased
+ out across the domain industry due to associated security risks. For
+ now, applications that need to support these non-compliant labels
+ may wish to consider trying the encode/decode operation in this library
+  first, and then falling back to using `encodings.idna` (a sketch of such a
+  fallback follows these notes). See `the Github
+  project <https://github.com/kjd/idna>`_ for more discussion.
+
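+One possible shape for the fallback mentioned in the emoji note above is
+sketched here; ``to_ascii_lenient`` is a hypothetical helper, not part of
+this library:
+
+.. code-block:: python
+
+    import idna
+
+    def to_ascii_lenient(name: str) -> bytes:
+        """Prefer IDNA 2008; fall back to the legacy IDNA 2003 codec."""
+        try:
+            return idna.encode(name, uts46=True)
+        except idna.IDNAError:
+            # encodings.idna is registered as the built-in "idna" codec.
+            return name.encode("idna")
+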
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/idna-3.10.dist-info/RECORD b/stripe_config/.venv/lib/python3.12/site-packages/idna-3.10.dist-info/RECORD
new file mode 100644
index 0000000..9cfce7f
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/idna-3.10.dist-info/RECORD
@@ -0,0 +1,22 @@
+idna-3.10.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4
+idna-3.10.dist-info/LICENSE.md,sha256=pZ8LDvNjWHQQmkRhykT_enDVBpboFHZ7-vch1Mmw2w8,1541
+idna-3.10.dist-info/METADATA,sha256=URR5ZyDfQ1PCEGhkYoojqfi2Ra0tau2--lhwG4XSfjI,10158
+idna-3.10.dist-info/RECORD,,
+idna-3.10.dist-info/WHEEL,sha256=EZbGkh7Ie4PoZfRQ8I0ZuP9VklN_TvcZ6DSE5Uar4z4,81
+idna/__init__.py,sha256=MPqNDLZbXqGaNdXxAFhiqFPKEQXju2jNQhCey6-5eJM,868
+idna/__pycache__/__init__.cpython-312.pyc,,
+idna/__pycache__/codec.cpython-312.pyc,,
+idna/__pycache__/compat.cpython-312.pyc,,
+idna/__pycache__/core.cpython-312.pyc,,
+idna/__pycache__/idnadata.cpython-312.pyc,,
+idna/__pycache__/intranges.cpython-312.pyc,,
+idna/__pycache__/package_data.cpython-312.pyc,,
+idna/__pycache__/uts46data.cpython-312.pyc,,
+idna/codec.py,sha256=PEew3ItwzjW4hymbasnty2N2OXvNcgHB-JjrBuxHPYY,3422
+idna/compat.py,sha256=RzLy6QQCdl9784aFhb2EX9EKGCJjg0P3PilGdeXXcx8,316
+idna/core.py,sha256=YJYyAMnwiQEPjVC4-Fqu_p4CJ6yKKuDGmppBNQNQpFs,13239
+idna/idnadata.py,sha256=W30GcIGvtOWYwAjZj4ZjuouUutC6ffgNuyjJy7fZ-lo,78306
+idna/intranges.py,sha256=amUtkdhYcQG8Zr-CoMM_kVRacxkivC1WgxN1b63KKdU,1898
+idna/package_data.py,sha256=q59S3OXsc5VI8j6vSD0sGBMyk6zZ4vWFREE88yCJYKs,21
+idna/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
+idna/uts46data.py,sha256=rt90K9J40gUSwppDPCrhjgi5AA6pWM65dEGRSf6rIhM,239289
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/idna-3.10.dist-info/WHEEL b/stripe_config/.venv/lib/python3.12/site-packages/idna-3.10.dist-info/WHEEL
new file mode 100644
index 0000000..3b5e64b
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/idna-3.10.dist-info/WHEEL
@@ -0,0 +1,4 @@
+Wheel-Version: 1.0
+Generator: flit 3.9.0
+Root-Is-Purelib: true
+Tag: py3-none-any
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/idna/__init__.py b/stripe_config/.venv/lib/python3.12/site-packages/idna/__init__.py
new file mode 100644
index 0000000..cfdc030
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/idna/__init__.py
@@ -0,0 +1,45 @@
+from .core import (
+ IDNABidiError,
+ IDNAError,
+ InvalidCodepoint,
+ InvalidCodepointContext,
+ alabel,
+ check_bidi,
+ check_hyphen_ok,
+ check_initial_combiner,
+ check_label,
+ check_nfc,
+ decode,
+ encode,
+ ulabel,
+ uts46_remap,
+ valid_contextj,
+ valid_contexto,
+ valid_label_length,
+ valid_string_length,
+)
+from .intranges import intranges_contain
+from .package_data import __version__
+
+__all__ = [
+ "__version__",
+ "IDNABidiError",
+ "IDNAError",
+ "InvalidCodepoint",
+ "InvalidCodepointContext",
+ "alabel",
+ "check_bidi",
+ "check_hyphen_ok",
+ "check_initial_combiner",
+ "check_label",
+ "check_nfc",
+ "decode",
+ "encode",
+ "intranges_contain",
+ "ulabel",
+ "uts46_remap",
+ "valid_contextj",
+ "valid_contexto",
+ "valid_label_length",
+ "valid_string_length",
+]
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/idna/__pycache__/__init__.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/idna/__pycache__/__init__.cpython-312.pyc
new file mode 100644
index 0000000..bed1ae5
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/idna/__pycache__/__init__.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/idna/__pycache__/codec.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/idna/__pycache__/codec.cpython-312.pyc
new file mode 100644
index 0000000..4387bd8
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/idna/__pycache__/codec.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/idna/__pycache__/compat.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/idna/__pycache__/compat.cpython-312.pyc
new file mode 100644
index 0000000..6ddae83
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/idna/__pycache__/compat.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/idna/__pycache__/core.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/idna/__pycache__/core.cpython-312.pyc
new file mode 100644
index 0000000..ce91964
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/idna/__pycache__/core.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/idna/__pycache__/idnadata.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/idna/__pycache__/idnadata.cpython-312.pyc
new file mode 100644
index 0000000..9f1a801
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/idna/__pycache__/idnadata.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/idna/__pycache__/intranges.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/idna/__pycache__/intranges.cpython-312.pyc
new file mode 100644
index 0000000..e85a586
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/idna/__pycache__/intranges.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/idna/__pycache__/package_data.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/idna/__pycache__/package_data.cpython-312.pyc
new file mode 100644
index 0000000..09211f6
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/idna/__pycache__/package_data.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/idna/__pycache__/uts46data.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/idna/__pycache__/uts46data.cpython-312.pyc
new file mode 100644
index 0000000..12b12a2
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/idna/__pycache__/uts46data.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/idna/codec.py b/stripe_config/.venv/lib/python3.12/site-packages/idna/codec.py
new file mode 100644
index 0000000..913abfd
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/idna/codec.py
@@ -0,0 +1,122 @@
+import codecs
+import re
+from typing import Any, Optional, Tuple
+
+from .core import IDNAError, alabel, decode, encode, ulabel
+
+_unicode_dots_re = re.compile("[\u002e\u3002\uff0e\uff61]")
+
+
+class Codec(codecs.Codec):
+ def encode(self, data: str, errors: str = "strict") -> Tuple[bytes, int]:
+ if errors != "strict":
+ raise IDNAError('Unsupported error handling "{}"'.format(errors))
+
+ if not data:
+ return b"", 0
+
+ return encode(data), len(data)
+
+ def decode(self, data: bytes, errors: str = "strict") -> Tuple[str, int]:
+ if errors != "strict":
+ raise IDNAError('Unsupported error handling "{}"'.format(errors))
+
+ if not data:
+ return "", 0
+
+ return decode(data), len(data)
+
+
+class IncrementalEncoder(codecs.BufferedIncrementalEncoder):
+ def _buffer_encode(self, data: str, errors: str, final: bool) -> Tuple[bytes, int]:
+ if errors != "strict":
+ raise IDNAError('Unsupported error handling "{}"'.format(errors))
+
+ if not data:
+ return b"", 0
+
+ labels = _unicode_dots_re.split(data)
+ trailing_dot = b""
+ if labels:
+ if not labels[-1]:
+ trailing_dot = b"."
+ del labels[-1]
+ elif not final:
+ # Keep potentially unfinished label until the next call
+ del labels[-1]
+ if labels:
+ trailing_dot = b"."
+
+ result = []
+ size = 0
+ for label in labels:
+ result.append(alabel(label))
+ if size:
+ size += 1
+ size += len(label)
+
+ # Join with U+002E
+ result_bytes = b".".join(result) + trailing_dot
+ size += len(trailing_dot)
+ return result_bytes, size
+
+
+class IncrementalDecoder(codecs.BufferedIncrementalDecoder):
+ def _buffer_decode(self, data: Any, errors: str, final: bool) -> Tuple[str, int]:
+ if errors != "strict":
+ raise IDNAError('Unsupported error handling "{}"'.format(errors))
+
+ if not data:
+ return ("", 0)
+
+ if not isinstance(data, str):
+ data = str(data, "ascii")
+
+ labels = _unicode_dots_re.split(data)
+ trailing_dot = ""
+ if labels:
+ if not labels[-1]:
+ trailing_dot = "."
+ del labels[-1]
+ elif not final:
+ # Keep potentially unfinished label until the next call
+ del labels[-1]
+ if labels:
+ trailing_dot = "."
+
+ result = []
+ size = 0
+ for label in labels:
+ result.append(ulabel(label))
+ if size:
+ size += 1
+ size += len(label)
+
+ result_str = ".".join(result) + trailing_dot
+ size += len(trailing_dot)
+ return (result_str, size)
+
+
+class StreamWriter(Codec, codecs.StreamWriter):
+ pass
+
+
+class StreamReader(Codec, codecs.StreamReader):
+ pass
+
+
+def search_function(name: str) -> Optional[codecs.CodecInfo]:
+ if name != "idna2008":
+ return None
+ return codecs.CodecInfo(
+ name=name,
+ encode=Codec().encode,
+ decode=Codec().decode,
+ incrementalencoder=IncrementalEncoder,
+ incrementaldecoder=IncrementalDecoder,
+ streamwriter=StreamWriter,
+ streamreader=StreamReader,
+ )
+
+
+codecs.register(search_function)
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/idna/compat.py b/stripe_config/.venv/lib/python3.12/site-packages/idna/compat.py
new file mode 100644
index 0000000..1df9f2a
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/idna/compat.py
@@ -0,0 +1,15 @@
+from typing import Any, Union
+
+from .core import decode, encode
+
+
+def ToASCII(label: str) -> bytes:
+ return encode(label)
+
+
+def ToUnicode(label: Union[bytes, bytearray]) -> str:
+ return decode(label)
+
+
+def nameprep(s: Any) -> None:
+ raise NotImplementedError("IDNA 2008 does not utilise nameprep protocol")
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/idna/core.py b/stripe_config/.venv/lib/python3.12/site-packages/idna/core.py
new file mode 100644
index 0000000..9115f12
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/idna/core.py
@@ -0,0 +1,437 @@
+import bisect
+import re
+import unicodedata
+from typing import Optional, Union
+
+from . import idnadata
+from .intranges import intranges_contain
+
+_virama_combining_class = 9
+_alabel_prefix = b"xn--"
+_unicode_dots_re = re.compile("[\u002e\u3002\uff0e\uff61]")
+
+
+class IDNAError(UnicodeError):
+ """Base exception for all IDNA-encoding related problems"""
+
+ pass
+
+
+class IDNABidiError(IDNAError):
+ """Exception when bidirectional requirements are not satisfied"""
+
+ pass
+
+
+class InvalidCodepoint(IDNAError):
+ """Exception when a disallowed or unallocated codepoint is used"""
+
+ pass
+
+
+class InvalidCodepointContext(IDNAError):
+ """Exception when the codepoint is not valid in the context it is used"""
+
+ pass
+
+
+def _combining_class(cp: int) -> int:
+ v = unicodedata.combining(chr(cp))
+ if v == 0:
+ if not unicodedata.name(chr(cp)):
+ raise ValueError("Unknown character in unicodedata")
+ return v
+
+
+def _is_script(cp: str, script: str) -> bool:
+ return intranges_contain(ord(cp), idnadata.scripts[script])
+
+
+def _punycode(s: str) -> bytes:
+ return s.encode("punycode")
+
+
+def _unot(s: int) -> str:
+ return "U+{:04X}".format(s)
+
+
+def valid_label_length(label: Union[bytes, str]) -> bool:
+ if len(label) > 63:
+ return False
+ return True
+
+
+def valid_string_length(label: Union[bytes, str], trailing_dot: bool) -> bool:
+ if len(label) > (254 if trailing_dot else 253):
+ return False
+ return True
+
+
+def check_bidi(label: str, check_ltr: bool = False) -> bool:
+ # Bidi rules should only be applied if string contains RTL characters
+ bidi_label = False
+ for idx, cp in enumerate(label, 1):
+ direction = unicodedata.bidirectional(cp)
+ if direction == "":
+ # String likely comes from a newer version of Unicode
+ raise IDNABidiError("Unknown directionality in label {} at position {}".format(repr(label), idx))
+ if direction in ["R", "AL", "AN"]:
+ bidi_label = True
+ if not bidi_label and not check_ltr:
+ return True
+
+ # Bidi rule 1
+ direction = unicodedata.bidirectional(label[0])
+ if direction in ["R", "AL"]:
+ rtl = True
+ elif direction == "L":
+ rtl = False
+ else:
+ raise IDNABidiError("First codepoint in label {} must be directionality L, R or AL".format(repr(label)))
+
+ valid_ending = False
+ number_type: Optional[str] = None
+ for idx, cp in enumerate(label, 1):
+ direction = unicodedata.bidirectional(cp)
+
+ if rtl:
+ # Bidi rule 2
+ if direction not in [
+ "R",
+ "AL",
+ "AN",
+ "EN",
+ "ES",
+ "CS",
+ "ET",
+ "ON",
+ "BN",
+ "NSM",
+ ]:
+ raise IDNABidiError("Invalid direction for codepoint at position {} in a right-to-left label".format(idx))
+ # Bidi rule 3
+ if direction in ["R", "AL", "EN", "AN"]:
+ valid_ending = True
+ elif direction != "NSM":
+ valid_ending = False
+ # Bidi rule 4
+ if direction in ["AN", "EN"]:
+ if not number_type:
+ number_type = direction
+ else:
+ if number_type != direction:
+ raise IDNABidiError("Can not mix numeral types in a right-to-left label")
+ else:
+ # Bidi rule 5
+ if direction not in ["L", "EN", "ES", "CS", "ET", "ON", "BN", "NSM"]:
+ raise IDNABidiError("Invalid direction for codepoint at position {} in a left-to-right label".format(idx))
+ # Bidi rule 6
+ if direction in ["L", "EN"]:
+ valid_ending = True
+ elif direction != "NSM":
+ valid_ending = False
+
+ if not valid_ending:
+ raise IDNABidiError("Label ends with illegal codepoint directionality")
+
+ return True
+
+
+def check_initial_combiner(label: str) -> bool:
+ if unicodedata.category(label[0])[0] == "M":
+ raise IDNAError("Label begins with an illegal combining character")
+ return True
+
+
+def check_hyphen_ok(label: str) -> bool:
+ if label[2:4] == "--":
+ raise IDNAError("Label has disallowed hyphens in 3rd and 4th position")
+ if label[0] == "-" or label[-1] == "-":
+ raise IDNAError("Label must not start or end with a hyphen")
+ return True
+
+
+def check_nfc(label: str) -> None:
+ if unicodedata.normalize("NFC", label) != label:
+ raise IDNAError("Label must be in Normalization Form C")
+
+
+def valid_contextj(label: str, pos: int) -> bool:
+ cp_value = ord(label[pos])
+
+ if cp_value == 0x200C:
+ if pos > 0:
+ if _combining_class(ord(label[pos - 1])) == _virama_combining_class:
+ return True
+
+ ok = False
+ for i in range(pos - 1, -1, -1):
+ joining_type = idnadata.joining_types.get(ord(label[i]))
+ if joining_type == ord("T"):
+ continue
+ elif joining_type in [ord("L"), ord("D")]:
+ ok = True
+ break
+ else:
+ break
+
+ if not ok:
+ return False
+
+ ok = False
+ for i in range(pos + 1, len(label)):
+ joining_type = idnadata.joining_types.get(ord(label[i]))
+ if joining_type == ord("T"):
+ continue
+ elif joining_type in [ord("R"), ord("D")]:
+ ok = True
+ break
+ else:
+ break
+ return ok
+
+ if cp_value == 0x200D:
+ if pos > 0:
+ if _combining_class(ord(label[pos - 1])) == _virama_combining_class:
+ return True
+ return False
+
+ else:
+ return False
+
+
+def valid_contexto(label: str, pos: int, exception: bool = False) -> bool:
+ cp_value = ord(label[pos])
+
+ if cp_value == 0x00B7:
+ if 0 < pos < len(label) - 1:
+ if ord(label[pos - 1]) == 0x006C and ord(label[pos + 1]) == 0x006C:
+ return True
+ return False
+
+ elif cp_value == 0x0375:
+ if pos < len(label) - 1 and len(label) > 1:
+ return _is_script(label[pos + 1], "Greek")
+ return False
+
+ elif cp_value == 0x05F3 or cp_value == 0x05F4:
+ if pos > 0:
+ return _is_script(label[pos - 1], "Hebrew")
+ return False
+
+ elif cp_value == 0x30FB:
+ for cp in label:
+ if cp == "\u30fb":
+ continue
+ if _is_script(cp, "Hiragana") or _is_script(cp, "Katakana") or _is_script(cp, "Han"):
+ return True
+ return False
+
+ elif 0x660 <= cp_value <= 0x669:
+ for cp in label:
+ if 0x6F0 <= ord(cp) <= 0x06F9:
+ return False
+ return True
+
+ elif 0x6F0 <= cp_value <= 0x6F9:
+ for cp in label:
+ if 0x660 <= ord(cp) <= 0x0669:
+ return False
+ return True
+
+ return False
+
+
+def check_label(label: Union[str, bytes, bytearray]) -> None:
+ if isinstance(label, (bytes, bytearray)):
+ label = label.decode("utf-8")
+ if len(label) == 0:
+ raise IDNAError("Empty Label")
+
+ check_nfc(label)
+ check_hyphen_ok(label)
+ check_initial_combiner(label)
+
+ for pos, cp in enumerate(label):
+ cp_value = ord(cp)
+ if intranges_contain(cp_value, idnadata.codepoint_classes["PVALID"]):
+ continue
+ elif intranges_contain(cp_value, idnadata.codepoint_classes["CONTEXTJ"]):
+ try:
+ if not valid_contextj(label, pos):
+ raise InvalidCodepointContext(
+ "Joiner {} not allowed at position {} in {}".format(_unot(cp_value), pos + 1, repr(label))
+ )
+ except ValueError:
+ raise IDNAError(
+ "Unknown codepoint adjacent to joiner {} at position {} in {}".format(
+ _unot(cp_value), pos + 1, repr(label)
+ )
+ )
+ elif intranges_contain(cp_value, idnadata.codepoint_classes["CONTEXTO"]):
+ if not valid_contexto(label, pos):
+ raise InvalidCodepointContext(
+ "Codepoint {} not allowed at position {} in {}".format(_unot(cp_value), pos + 1, repr(label))
+ )
+ else:
+ raise InvalidCodepoint(
+ "Codepoint {} at position {} of {} not allowed".format(_unot(cp_value), pos + 1, repr(label))
+ )
+
+ check_bidi(label)
+
+
+def alabel(label: str) -> bytes:
+ try:
+ label_bytes = label.encode("ascii")
+ ulabel(label_bytes)
+ if not valid_label_length(label_bytes):
+ raise IDNAError("Label too long")
+ return label_bytes
+ except UnicodeEncodeError:
+ pass
+
+ check_label(label)
+ label_bytes = _alabel_prefix + _punycode(label)
+
+ if not valid_label_length(label_bytes):
+ raise IDNAError("Label too long")
+
+ return label_bytes
+
+
+def ulabel(label: Union[str, bytes, bytearray]) -> str:
+ if not isinstance(label, (bytes, bytearray)):
+ try:
+ label_bytes = label.encode("ascii")
+ except UnicodeEncodeError:
+ check_label(label)
+ return label
+ else:
+ label_bytes = label
+
+ label_bytes = label_bytes.lower()
+ if label_bytes.startswith(_alabel_prefix):
+ label_bytes = label_bytes[len(_alabel_prefix) :]
+ if not label_bytes:
+ raise IDNAError("Malformed A-label, no Punycode eligible content found")
+ if label_bytes.decode("ascii")[-1] == "-":
+ raise IDNAError("A-label must not end with a hyphen")
+ else:
+ check_label(label_bytes)
+ return label_bytes.decode("ascii")
+
+ try:
+ label = label_bytes.decode("punycode")
+ except UnicodeError:
+ raise IDNAError("Invalid A-label")
+ check_label(label)
+ return label
+
+
+def uts46_remap(domain: str, std3_rules: bool = True, transitional: bool = False) -> str:
+ """Re-map the characters in the string according to UTS46 processing."""
+ from .uts46data import uts46data
+
+ output = ""
+
+ for pos, char in enumerate(domain):
+ code_point = ord(char)
+ try:
+ uts46row = uts46data[code_point if code_point < 256 else bisect.bisect_left(uts46data, (code_point, "Z")) - 1]
+ status = uts46row[1]
+ replacement: Optional[str] = None
+ if len(uts46row) == 3:
+ replacement = uts46row[2]
+ if (
+ status == "V"
+ or (status == "D" and not transitional)
+ or (status == "3" and not std3_rules and replacement is None)
+ ):
+ output += char
+ elif replacement is not None and (
+ status == "M" or (status == "3" and not std3_rules) or (status == "D" and transitional)
+ ):
+ output += replacement
+ elif status != "I":
+ raise IndexError()
+ except IndexError:
+ raise InvalidCodepoint(
+ "Codepoint {} not allowed at position {} in {}".format(_unot(code_point), pos + 1, repr(domain))
+ )
+
+ return unicodedata.normalize("NFC", output)
+
+
+def encode(
+ s: Union[str, bytes, bytearray],
+ strict: bool = False,
+ uts46: bool = False,
+ std3_rules: bool = False,
+ transitional: bool = False,
+) -> bytes:
+ if not isinstance(s, str):
+ try:
+ s = str(s, "ascii")
+ except UnicodeDecodeError:
+ raise IDNAError("should pass a unicode string to the function rather than a byte string.")
+ if uts46:
+ s = uts46_remap(s, std3_rules, transitional)
+ trailing_dot = False
+ result = []
+ if strict:
+ labels = s.split(".")
+ else:
+ labels = _unicode_dots_re.split(s)
+ if not labels or labels == [""]:
+ raise IDNAError("Empty domain")
+ if labels[-1] == "":
+ del labels[-1]
+ trailing_dot = True
+ for label in labels:
+ s = alabel(label)
+ if s:
+ result.append(s)
+ else:
+ raise IDNAError("Empty label")
+ if trailing_dot:
+ result.append(b"")
+ s = b".".join(result)
+ if not valid_string_length(s, trailing_dot):
+ raise IDNAError("Domain too long")
+ return s
+
+
+def decode(
+ s: Union[str, bytes, bytearray],
+ strict: bool = False,
+ uts46: bool = False,
+ std3_rules: bool = False,
+) -> str:
+ try:
+ if not isinstance(s, str):
+ s = str(s, "ascii")
+ except UnicodeDecodeError:
+ raise IDNAError("Invalid ASCII in A-label")
+ if uts46:
+ s = uts46_remap(s, std3_rules, False)
+ trailing_dot = False
+ result = []
+ if not strict:
+ labels = _unicode_dots_re.split(s)
+ else:
+ labels = s.split(".")
+ if not labels or labels == [""]:
+ raise IDNAError("Empty domain")
+ if not labels[-1]:
+ del labels[-1]
+ trailing_dot = True
+ for label in labels:
+ s = ulabel(label)
+ if s:
+ result.append(s)
+ else:
+ raise IDNAError("Empty label")
+ if trailing_dot:
+ result.append("")
+ return ".".join(result)
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/idna/idnadata.py b/stripe_config/.venv/lib/python3.12/site-packages/idna/idnadata.py
new file mode 100644
index 0000000..4be6004
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/idna/idnadata.py
@@ -0,0 +1,4243 @@
+# This file is automatically generated by tools/idna-data
+
+__version__ = "15.1.0"
+scripts = {
+ "Greek": (
+ 0x37000000374,
+ 0x37500000378,
+ 0x37A0000037E,
+ 0x37F00000380,
+ 0x38400000385,
+ 0x38600000387,
+ 0x3880000038B,
+ 0x38C0000038D,
+ 0x38E000003A2,
+ 0x3A3000003E2,
+ 0x3F000000400,
+ 0x1D2600001D2B,
+ 0x1D5D00001D62,
+ 0x1D6600001D6B,
+ 0x1DBF00001DC0,
+ 0x1F0000001F16,
+ 0x1F1800001F1E,
+ 0x1F2000001F46,
+ 0x1F4800001F4E,
+ 0x1F5000001F58,
+ 0x1F5900001F5A,
+ 0x1F5B00001F5C,
+ 0x1F5D00001F5E,
+ 0x1F5F00001F7E,
+ 0x1F8000001FB5,
+ 0x1FB600001FC5,
+ 0x1FC600001FD4,
+ 0x1FD600001FDC,
+ 0x1FDD00001FF0,
+ 0x1FF200001FF5,
+ 0x1FF600001FFF,
+ 0x212600002127,
+ 0xAB650000AB66,
+ 0x101400001018F,
+ 0x101A0000101A1,
+ 0x1D2000001D246,
+ ),
+ "Han": (
+ 0x2E8000002E9A,
+ 0x2E9B00002EF4,
+ 0x2F0000002FD6,
+ 0x300500003006,
+ 0x300700003008,
+ 0x30210000302A,
+ 0x30380000303C,
+ 0x340000004DC0,
+ 0x4E000000A000,
+ 0xF9000000FA6E,
+ 0xFA700000FADA,
+ 0x16FE200016FE4,
+ 0x16FF000016FF2,
+ 0x200000002A6E0,
+ 0x2A7000002B73A,
+ 0x2B7400002B81E,
+ 0x2B8200002CEA2,
+ 0x2CEB00002EBE1,
+ 0x2EBF00002EE5E,
+ 0x2F8000002FA1E,
+ 0x300000003134B,
+ 0x31350000323B0,
+ ),
+ "Hebrew": (
+ 0x591000005C8,
+ 0x5D0000005EB,
+ 0x5EF000005F5,
+ 0xFB1D0000FB37,
+ 0xFB380000FB3D,
+ 0xFB3E0000FB3F,
+ 0xFB400000FB42,
+ 0xFB430000FB45,
+ 0xFB460000FB50,
+ ),
+ "Hiragana": (
+ 0x304100003097,
+ 0x309D000030A0,
+ 0x1B0010001B120,
+ 0x1B1320001B133,
+ 0x1B1500001B153,
+ 0x1F2000001F201,
+ ),
+ "Katakana": (
+ 0x30A1000030FB,
+ 0x30FD00003100,
+ 0x31F000003200,
+ 0x32D0000032FF,
+ 0x330000003358,
+ 0xFF660000FF70,
+ 0xFF710000FF9E,
+ 0x1AFF00001AFF4,
+ 0x1AFF50001AFFC,
+ 0x1AFFD0001AFFF,
+ 0x1B0000001B001,
+ 0x1B1200001B123,
+ 0x1B1550001B156,
+ 0x1B1640001B168,
+ ),
+}
+joining_types = {
+ 0xAD: 84,
+ 0x300: 84,
+ 0x301: 84,
+ 0x302: 84,
+ 0x303: 84,
+ 0x304: 84,
+ 0x305: 84,
+ 0x306: 84,
+ 0x307: 84,
+ 0x308: 84,
+ 0x309: 84,
+ 0x30A: 84,
+ 0x30B: 84,
+ 0x30C: 84,
+ 0x30D: 84,
+ 0x30E: 84,
+ 0x30F: 84,
+ 0x310: 84,
+ 0x311: 84,
+ 0x312: 84,
+ 0x313: 84,
+ 0x314: 84,
+ 0x315: 84,
+ 0x316: 84,
+ 0x317: 84,
+ 0x318: 84,
+ 0x319: 84,
+ 0x31A: 84,
+ 0x31B: 84,
+ 0x31C: 84,
+ 0x31D: 84,
+ 0x31E: 84,
+ 0x31F: 84,
+ 0x320: 84,
+ 0x321: 84,
+ 0x322: 84,
+ 0x323: 84,
+ 0x324: 84,
+ 0x325: 84,
+ 0x326: 84,
+ 0x327: 84,
+ 0x328: 84,
+ 0x329: 84,
+ 0x32A: 84,
+ 0x32B: 84,
+ 0x32C: 84,
+ 0x32D: 84,
+ 0x32E: 84,
+ 0x32F: 84,
+ 0x330: 84,
+ 0x331: 84,
+ 0x332: 84,
+ 0x333: 84,
+ 0x334: 84,
+ 0x335: 84,
+ 0x336: 84,
+ 0x337: 84,
+ 0x338: 84,
+ 0x339: 84,
+ 0x33A: 84,
+ 0x33B: 84,
+ 0x33C: 84,
+ 0x33D: 84,
+ 0x33E: 84,
+ 0x33F: 84,
+ 0x340: 84,
+ 0x341: 84,
+ 0x342: 84,
+ 0x343: 84,
+ 0x344: 84,
+ 0x345: 84,
+ 0x346: 84,
+ 0x347: 84,
+ 0x348: 84,
+ 0x349: 84,
+ 0x34A: 84,
+ 0x34B: 84,
+ 0x34C: 84,
+ 0x34D: 84,
+ 0x34E: 84,
+ 0x34F: 84,
+ 0x350: 84,
+ 0x351: 84,
+ 0x352: 84,
+ 0x353: 84,
+ 0x354: 84,
+ 0x355: 84,
+ 0x356: 84,
+ 0x357: 84,
+ 0x358: 84,
+ 0x359: 84,
+ 0x35A: 84,
+ 0x35B: 84,
+ 0x35C: 84,
+ 0x35D: 84,
+ 0x35E: 84,
+ 0x35F: 84,
+ 0x360: 84,
+ 0x361: 84,
+ 0x362: 84,
+ 0x363: 84,
+ 0x364: 84,
+ 0x365: 84,
+ 0x366: 84,
+ 0x367: 84,
+ 0x368: 84,
+ 0x369: 84,
+ 0x36A: 84,
+ 0x36B: 84,
+ 0x36C: 84,
+ 0x36D: 84,
+ 0x36E: 84,
+ 0x36F: 84,
+ 0x483: 84,
+ 0x484: 84,
+ 0x485: 84,
+ 0x486: 84,
+ 0x487: 84,
+ 0x488: 84,
+ 0x489: 84,
+ 0x591: 84,
+ 0x592: 84,
+ 0x593: 84,
+ 0x594: 84,
+ 0x595: 84,
+ 0x596: 84,
+ 0x597: 84,
+ 0x598: 84,
+ 0x599: 84,
+ 0x59A: 84,
+ 0x59B: 84,
+ 0x59C: 84,
+ 0x59D: 84,
+ 0x59E: 84,
+ 0x59F: 84,
+ 0x5A0: 84,
+ 0x5A1: 84,
+ 0x5A2: 84,
+ 0x5A3: 84,
+ 0x5A4: 84,
+ 0x5A5: 84,
+ 0x5A6: 84,
+ 0x5A7: 84,
+ 0x5A8: 84,
+ 0x5A9: 84,
+ 0x5AA: 84,
+ 0x5AB: 84,
+ 0x5AC: 84,
+ 0x5AD: 84,
+ 0x5AE: 84,
+ 0x5AF: 84,
+ 0x5B0: 84,
+ 0x5B1: 84,
+ 0x5B2: 84,
+ 0x5B3: 84,
+ 0x5B4: 84,
+ 0x5B5: 84,
+ 0x5B6: 84,
+ 0x5B7: 84,
+ 0x5B8: 84,
+ 0x5B9: 84,
+ 0x5BA: 84,
+ 0x5BB: 84,
+ 0x5BC: 84,
+ 0x5BD: 84,
+ 0x5BF: 84,
+ 0x5C1: 84,
+ 0x5C2: 84,
+ 0x5C4: 84,
+ 0x5C5: 84,
+ 0x5C7: 84,
+ 0x610: 84,
+ 0x611: 84,
+ 0x612: 84,
+ 0x613: 84,
+ 0x614: 84,
+ 0x615: 84,
+ 0x616: 84,
+ 0x617: 84,
+ 0x618: 84,
+ 0x619: 84,
+ 0x61A: 84,
+ 0x61C: 84,
+ 0x620: 68,
+ 0x622: 82,
+ 0x623: 82,
+ 0x624: 82,
+ 0x625: 82,
+ 0x626: 68,
+ 0x627: 82,
+ 0x628: 68,
+ 0x629: 82,
+ 0x62A: 68,
+ 0x62B: 68,
+ 0x62C: 68,
+ 0x62D: 68,
+ 0x62E: 68,
+ 0x62F: 82,
+ 0x630: 82,
+ 0x631: 82,
+ 0x632: 82,
+ 0x633: 68,
+ 0x634: 68,
+ 0x635: 68,
+ 0x636: 68,
+ 0x637: 68,
+ 0x638: 68,
+ 0x639: 68,
+ 0x63A: 68,
+ 0x63B: 68,
+ 0x63C: 68,
+ 0x63D: 68,
+ 0x63E: 68,
+ 0x63F: 68,
+ 0x640: 67,
+ 0x641: 68,
+ 0x642: 68,
+ 0x643: 68,
+ 0x644: 68,
+ 0x645: 68,
+ 0x646: 68,
+ 0x647: 68,
+ 0x648: 82,
+ 0x649: 68,
+ 0x64A: 68,
+ 0x64B: 84,
+ 0x64C: 84,
+ 0x64D: 84,
+ 0x64E: 84,
+ 0x64F: 84,
+ 0x650: 84,
+ 0x651: 84,
+ 0x652: 84,
+ 0x653: 84,
+ 0x654: 84,
+ 0x655: 84,
+ 0x656: 84,
+ 0x657: 84,
+ 0x658: 84,
+ 0x659: 84,
+ 0x65A: 84,
+ 0x65B: 84,
+ 0x65C: 84,
+ 0x65D: 84,
+ 0x65E: 84,
+ 0x65F: 84,
+ 0x66E: 68,
+ 0x66F: 68,
+ 0x670: 84,
+ 0x671: 82,
+ 0x672: 82,
+ 0x673: 82,
+ 0x675: 82,
+ 0x676: 82,
+ 0x677: 82,
+ 0x678: 68,
+ 0x679: 68,
+ 0x67A: 68,
+ 0x67B: 68,
+ 0x67C: 68,
+ 0x67D: 68,
+ 0x67E: 68,
+ 0x67F: 68,
+ 0x680: 68,
+ 0x681: 68,
+ 0x682: 68,
+ 0x683: 68,
+ 0x684: 68,
+ 0x685: 68,
+ 0x686: 68,
+ 0x687: 68,
+ 0x688: 82,
+ 0x689: 82,
+ 0x68A: 82,
+ 0x68B: 82,
+ 0x68C: 82,
+ 0x68D: 82,
+ 0x68E: 82,
+ 0x68F: 82,
+ 0x690: 82,
+ 0x691: 82,
+ 0x692: 82,
+ 0x693: 82,
+ 0x694: 82,
+ 0x695: 82,
+ 0x696: 82,
+ 0x697: 82,
+ 0x698: 82,
+ 0x699: 82,
+ 0x69A: 68,
+ 0x69B: 68,
+ 0x69C: 68,
+ 0x69D: 68,
+ 0x69E: 68,
+ 0x69F: 68,
+ 0x6A0: 68,
+ 0x6A1: 68,
+ 0x6A2: 68,
+ 0x6A3: 68,
+ 0x6A4: 68,
+ 0x6A5: 68,
+ 0x6A6: 68,
+ 0x6A7: 68,
+ 0x6A8: 68,
+ 0x6A9: 68,
+ 0x6AA: 68,
+ 0x6AB: 68,
+ 0x6AC: 68,
+ 0x6AD: 68,
+ 0x6AE: 68,
+ 0x6AF: 68,
+ 0x6B0: 68,
+ 0x6B1: 68,
+ 0x6B2: 68,
+ 0x6B3: 68,
+ 0x6B4: 68,
+ 0x6B5: 68,
+ 0x6B6: 68,
+ 0x6B7: 68,
+ 0x6B8: 68,
+ 0x6B9: 68,
+ 0x6BA: 68,
+ 0x6BB: 68,
+ 0x6BC: 68,
+ 0x6BD: 68,
+ 0x6BE: 68,
+ 0x6BF: 68,
+ 0x6C0: 82,
+ 0x6C1: 68,
+ 0x6C2: 68,
+ 0x6C3: 82,
+ 0x6C4: 82,
+ 0x6C5: 82,
+ 0x6C6: 82,
+ 0x6C7: 82,
+ 0x6C8: 82,
+ 0x6C9: 82,
+ 0x6CA: 82,
+ 0x6CB: 82,
+ 0x6CC: 68,
+ 0x6CD: 82,
+ 0x6CE: 68,
+ 0x6CF: 82,
+ 0x6D0: 68,
+ 0x6D1: 68,
+ 0x6D2: 82,
+ 0x6D3: 82,
+ 0x6D5: 82,
+ 0x6D6: 84,
+ 0x6D7: 84,
+ 0x6D8: 84,
+ 0x6D9: 84,
+ 0x6DA: 84,
+ 0x6DB: 84,
+ 0x6DC: 84,
+ 0x6DF: 84,
+ 0x6E0: 84,
+ 0x6E1: 84,
+ 0x6E2: 84,
+ 0x6E3: 84,
+ 0x6E4: 84,
+ 0x6E7: 84,
+ 0x6E8: 84,
+ 0x6EA: 84,
+ 0x6EB: 84,
+ 0x6EC: 84,
+ 0x6ED: 84,
+ 0x6EE: 82,
+ 0x6EF: 82,
+ 0x6FA: 68,
+ 0x6FB: 68,
+ 0x6FC: 68,
+ 0x6FF: 68,
+ 0x70F: 84,
+ 0x710: 82,
+ 0x711: 84,
+ 0x712: 68,
+ 0x713: 68,
+ 0x714: 68,
+ 0x715: 82,
+ 0x716: 82,
+ 0x717: 82,
+ 0x718: 82,
+ 0x719: 82,
+ 0x71A: 68,
+ 0x71B: 68,
+ 0x71C: 68,
+ 0x71D: 68,
+ 0x71E: 82,
+ 0x71F: 68,
+ 0x720: 68,
+ 0x721: 68,
+ 0x722: 68,
+ 0x723: 68,
+ 0x724: 68,
+ 0x725: 68,
+ 0x726: 68,
+ 0x727: 68,
+ 0x728: 82,
+ 0x729: 68,
+ 0x72A: 82,
+ 0x72B: 68,
+ 0x72C: 82,
+ 0x72D: 68,
+ 0x72E: 68,
+ 0x72F: 82,
+ 0x730: 84,
+ 0x731: 84,
+ 0x732: 84,
+ 0x733: 84,
+ 0x734: 84,
+ 0x735: 84,
+ 0x736: 84,
+ 0x737: 84,
+ 0x738: 84,
+ 0x739: 84,
+ 0x73A: 84,
+ 0x73B: 84,
+ 0x73C: 84,
+ 0x73D: 84,
+ 0x73E: 84,
+ 0x73F: 84,
+ 0x740: 84,
+ 0x741: 84,
+ 0x742: 84,
+ 0x743: 84,
+ 0x744: 84,
+ 0x745: 84,
+ 0x746: 84,
+ 0x747: 84,
+ 0x748: 84,
+ 0x749: 84,
+ 0x74A: 84,
+ 0x74D: 82,
+ 0x74E: 68,
+ 0x74F: 68,
+ 0x750: 68,
+ 0x751: 68,
+ 0x752: 68,
+ 0x753: 68,
+ 0x754: 68,
+ 0x755: 68,
+ 0x756: 68,
+ 0x757: 68,
+ 0x758: 68,
+ 0x759: 82,
+ 0x75A: 82,
+ 0x75B: 82,
+ 0x75C: 68,
+ 0x75D: 68,
+ 0x75E: 68,
+ 0x75F: 68,
+ 0x760: 68,
+ 0x761: 68,
+ 0x762: 68,
+ 0x763: 68,
+ 0x764: 68,
+ 0x765: 68,
+ 0x766: 68,
+ 0x767: 68,
+ 0x768: 68,
+ 0x769: 68,
+ 0x76A: 68,
+ 0x76B: 82,
+ 0x76C: 82,
+ 0x76D: 68,
+ 0x76E: 68,
+ 0x76F: 68,
+ 0x770: 68,
+ 0x771: 82,
+ 0x772: 68,
+ 0x773: 82,
+ 0x774: 82,
+ 0x775: 68,
+ 0x776: 68,
+ 0x777: 68,
+ 0x778: 82,
+ 0x779: 82,
+ 0x77A: 68,
+ 0x77B: 68,
+ 0x77C: 68,
+ 0x77D: 68,
+ 0x77E: 68,
+ 0x77F: 68,
+ 0x7A6: 84,
+ 0x7A7: 84,
+ 0x7A8: 84,
+ 0x7A9: 84,
+ 0x7AA: 84,
+ 0x7AB: 84,
+ 0x7AC: 84,
+ 0x7AD: 84,
+ 0x7AE: 84,
+ 0x7AF: 84,
+ 0x7B0: 84,
+ 0x7CA: 68,
+ 0x7CB: 68,
+ 0x7CC: 68,
+ 0x7CD: 68,
+ 0x7CE: 68,
+ 0x7CF: 68,
+ 0x7D0: 68,
+ 0x7D1: 68,
+ 0x7D2: 68,
+ 0x7D3: 68,
+ 0x7D4: 68,
+ 0x7D5: 68,
+ 0x7D6: 68,
+ 0x7D7: 68,
+ 0x7D8: 68,
+ 0x7D9: 68,
+ 0x7DA: 68,
+ 0x7DB: 68,
+ 0x7DC: 68,
+ 0x7DD: 68,
+ 0x7DE: 68,
+ 0x7DF: 68,
+ 0x7E0: 68,
+ 0x7E1: 68,
+ 0x7E2: 68,
+ 0x7E3: 68,
+ 0x7E4: 68,
+ 0x7E5: 68,
+ 0x7E6: 68,
+ 0x7E7: 68,
+ 0x7E8: 68,
+ 0x7E9: 68,
+ 0x7EA: 68,
+ 0x7EB: 84,
+ 0x7EC: 84,
+ 0x7ED: 84,
+ 0x7EE: 84,
+ 0x7EF: 84,
+ 0x7F0: 84,
+ 0x7F1: 84,
+ 0x7F2: 84,
+ 0x7F3: 84,
+ 0x7FA: 67,
+ 0x7FD: 84,
+ 0x816: 84,
+ 0x817: 84,
+ 0x818: 84,
+ 0x819: 84,
+ 0x81B: 84,
+ 0x81C: 84,
+ 0x81D: 84,
+ 0x81E: 84,
+ 0x81F: 84,
+ 0x820: 84,
+ 0x821: 84,
+ 0x822: 84,
+ 0x823: 84,
+ 0x825: 84,
+ 0x826: 84,
+ 0x827: 84,
+ 0x829: 84,
+ 0x82A: 84,
+ 0x82B: 84,
+ 0x82C: 84,
+ 0x82D: 84,
+ 0x840: 82,
+ 0x841: 68,
+ 0x842: 68,
+ 0x843: 68,
+ 0x844: 68,
+ 0x845: 68,
+ 0x846: 82,
+ 0x847: 82,
+ 0x848: 68,
+ 0x849: 82,
+ 0x84A: 68,
+ 0x84B: 68,
+ 0x84C: 68,
+ 0x84D: 68,
+ 0x84E: 68,
+ 0x84F: 68,
+ 0x850: 68,
+ 0x851: 68,
+ 0x852: 68,
+ 0x853: 68,
+ 0x854: 82,
+ 0x855: 68,
+ 0x856: 82,
+ 0x857: 82,
+ 0x858: 82,
+ 0x859: 84,
+ 0x85A: 84,
+ 0x85B: 84,
+ 0x860: 68,
+ 0x862: 68,
+ 0x863: 68,
+ 0x864: 68,
+ 0x865: 68,
+ 0x867: 82,
+ 0x868: 68,
+ 0x869: 82,
+ 0x86A: 82,
+ 0x870: 82,
+ 0x871: 82,
+ 0x872: 82,
+ 0x873: 82,
+ 0x874: 82,
+ 0x875: 82,
+ 0x876: 82,
+ 0x877: 82,
+ 0x878: 82,
+ 0x879: 82,
+ 0x87A: 82,
+ 0x87B: 82,
+ 0x87C: 82,
+ 0x87D: 82,
+ 0x87E: 82,
+ 0x87F: 82,
+ 0x880: 82,
+ 0x881: 82,
+ 0x882: 82,
+ 0x883: 67,
+ 0x884: 67,
+ 0x885: 67,
+ 0x886: 68,
+ 0x889: 68,
+ 0x88A: 68,
+ 0x88B: 68,
+ 0x88C: 68,
+ 0x88D: 68,
+ 0x88E: 82,
+ 0x898: 84,
+ 0x899: 84,
+ 0x89A: 84,
+ 0x89B: 84,
+ 0x89C: 84,
+ 0x89D: 84,
+ 0x89E: 84,
+ 0x89F: 84,
+ 0x8A0: 68,
+ 0x8A1: 68,
+ 0x8A2: 68,
+ 0x8A3: 68,
+ 0x8A4: 68,
+ 0x8A5: 68,
+ 0x8A6: 68,
+ 0x8A7: 68,
+ 0x8A8: 68,
+ 0x8A9: 68,
+ 0x8AA: 82,
+ 0x8AB: 82,
+ 0x8AC: 82,
+ 0x8AE: 82,
+ 0x8AF: 68,
+ 0x8B0: 68,
+ 0x8B1: 82,
+ 0x8B2: 82,
+ 0x8B3: 68,
+ 0x8B4: 68,
+ 0x8B5: 68,
+ 0x8B6: 68,
+ 0x8B7: 68,
+ 0x8B8: 68,
+ 0x8B9: 82,
+ 0x8BA: 68,
+ 0x8BB: 68,
+ 0x8BC: 68,
+ 0x8BD: 68,
+ 0x8BE: 68,
+ 0x8BF: 68,
+ 0x8C0: 68,
+ 0x8C1: 68,
+ 0x8C2: 68,
+ 0x8C3: 68,
+ 0x8C4: 68,
+ 0x8C5: 68,
+ 0x8C6: 68,
+ 0x8C7: 68,
+ 0x8C8: 68,
+ 0x8CA: 84,
+ 0x8CB: 84,
+ 0x8CC: 84,
+ 0x8CD: 84,
+ 0x8CE: 84,
+ 0x8CF: 84,
+ 0x8D0: 84,
+ 0x8D1: 84,
+ 0x8D2: 84,
+ 0x8D3: 84,
+ 0x8D4: 84,
+ 0x8D5: 84,
+ 0x8D6: 84,
+ 0x8D7: 84,
+ 0x8D8: 84,
+ 0x8D9: 84,
+ 0x8DA: 84,
+ 0x8DB: 84,
+ 0x8DC: 84,
+ 0x8DD: 84,
+ 0x8DE: 84,
+ 0x8DF: 84,
+ 0x8E0: 84,
+ 0x8E1: 84,
+ 0x8E3: 84,
+ 0x8E4: 84,
+ 0x8E5: 84,
+ 0x8E6: 84,
+ 0x8E7: 84,
+ 0x8E8: 84,
+ 0x8E9: 84,
+ 0x8EA: 84,
+ 0x8EB: 84,
+ 0x8EC: 84,
+ 0x8ED: 84,
+ 0x8EE: 84,
+ 0x8EF: 84,
+ 0x8F0: 84,
+ 0x8F1: 84,
+ 0x8F2: 84,
+ 0x8F3: 84,
+ 0x8F4: 84,
+ 0x8F5: 84,
+ 0x8F6: 84,
+ 0x8F7: 84,
+ 0x8F8: 84,
+ 0x8F9: 84,
+ 0x8FA: 84,
+ 0x8FB: 84,
+ 0x8FC: 84,
+ 0x8FD: 84,
+ 0x8FE: 84,
+ 0x8FF: 84,
+ 0x900: 84,
+ 0x901: 84,
+ 0x902: 84,
+ 0x93A: 84,
+ 0x93C: 84,
+ 0x941: 84,
+ 0x942: 84,
+ 0x943: 84,
+ 0x944: 84,
+ 0x945: 84,
+ 0x946: 84,
+ 0x947: 84,
+ 0x948: 84,
+ 0x94D: 84,
+ 0x951: 84,
+ 0x952: 84,
+ 0x953: 84,
+ 0x954: 84,
+ 0x955: 84,
+ 0x956: 84,
+ 0x957: 84,
+ 0x962: 84,
+ 0x963: 84,
+ 0x981: 84,
+ 0x9BC: 84,
+ 0x9C1: 84,
+ 0x9C2: 84,
+ 0x9C3: 84,
+ 0x9C4: 84,
+ 0x9CD: 84,
+ 0x9E2: 84,
+ 0x9E3: 84,
+ 0x9FE: 84,
+ 0xA01: 84,
+ 0xA02: 84,
+ 0xA3C: 84,
+ 0xA41: 84,
+ 0xA42: 84,
+ 0xA47: 84,
+ 0xA48: 84,
+ 0xA4B: 84,
+ 0xA4C: 84,
+ 0xA4D: 84,
+ 0xA51: 84,
+ 0xA70: 84,
+ 0xA71: 84,
+ 0xA75: 84,
+ 0xA81: 84,
+ 0xA82: 84,
+ 0xABC: 84,
+ 0xAC1: 84,
+ 0xAC2: 84,
+ 0xAC3: 84,
+ 0xAC4: 84,
+ 0xAC5: 84,
+ 0xAC7: 84,
+ 0xAC8: 84,
+ 0xACD: 84,
+ 0xAE2: 84,
+ 0xAE3: 84,
+ 0xAFA: 84,
+ 0xAFB: 84,
+ 0xAFC: 84,
+ 0xAFD: 84,
+ 0xAFE: 84,
+ 0xAFF: 84,
+ 0xB01: 84,
+ 0xB3C: 84,
+ 0xB3F: 84,
+ 0xB41: 84,
+ 0xB42: 84,
+ 0xB43: 84,
+ 0xB44: 84,
+ 0xB4D: 84,
+ 0xB55: 84,
+ 0xB56: 84,
+ 0xB62: 84,
+ 0xB63: 84,
+ 0xB82: 84,
+ 0xBC0: 84,
+ 0xBCD: 84,
+ 0xC00: 84,
+ 0xC04: 84,
+ 0xC3C: 84,
+ 0xC3E: 84,
+ 0xC3F: 84,
+ 0xC40: 84,
+ 0xC46: 84,
+ 0xC47: 84,
+ 0xC48: 84,
+ 0xC4A: 84,
+ 0xC4B: 84,
+ 0xC4C: 84,
+ 0xC4D: 84,
+ 0xC55: 84,
+ 0xC56: 84,
+ 0xC62: 84,
+ 0xC63: 84,
+ 0xC81: 84,
+ 0xCBC: 84,
+ 0xCBF: 84,
+ 0xCC6: 84,
+ 0xCCC: 84,
+ 0xCCD: 84,
+ 0xCE2: 84,
+ 0xCE3: 84,
+ 0xD00: 84,
+ 0xD01: 84,
+ 0xD3B: 84,
+ 0xD3C: 84,
+ 0xD41: 84,
+ 0xD42: 84,
+ 0xD43: 84,
+ 0xD44: 84,
+ 0xD4D: 84,
+ 0xD62: 84,
+ 0xD63: 84,
+ 0xD81: 84,
+ 0xDCA: 84,
+ 0xDD2: 84,
+ 0xDD3: 84,
+ 0xDD4: 84,
+ 0xDD6: 84,
+ 0xE31: 84,
+ 0xE34: 84,
+ 0xE35: 84,
+ 0xE36: 84,
+ 0xE37: 84,
+ 0xE38: 84,
+ 0xE39: 84,
+ 0xE3A: 84,
+ 0xE47: 84,
+ 0xE48: 84,
+ 0xE49: 84,
+ 0xE4A: 84,
+ 0xE4B: 84,
+ 0xE4C: 84,
+ 0xE4D: 84,
+ 0xE4E: 84,
+ 0xEB1: 84,
+ 0xEB4: 84,
+ 0xEB5: 84,
+ 0xEB6: 84,
+ 0xEB7: 84,
+ 0xEB8: 84,
+ 0xEB9: 84,
+ 0xEBA: 84,
+ 0xEBB: 84,
+ 0xEBC: 84,
+ 0xEC8: 84,
+ 0xEC9: 84,
+ 0xECA: 84,
+ 0xECB: 84,
+ 0xECC: 84,
+ 0xECD: 84,
+ 0xECE: 84,
+ 0xF18: 84,
+ 0xF19: 84,
+ 0xF35: 84,
+ 0xF37: 84,
+ 0xF39: 84,
+ 0xF71: 84,
+ 0xF72: 84,
+ 0xF73: 84,
+ 0xF74: 84,
+ 0xF75: 84,
+ 0xF76: 84,
+ 0xF77: 84,
+ 0xF78: 84,
+ 0xF79: 84,
+ 0xF7A: 84,
+ 0xF7B: 84,
+ 0xF7C: 84,
+ 0xF7D: 84,
+ 0xF7E: 84,
+ 0xF80: 84,
+ 0xF81: 84,
+ 0xF82: 84,
+ 0xF83: 84,
+ 0xF84: 84,
+ 0xF86: 84,
+ 0xF87: 84,
+ 0xF8D: 84,
+ 0xF8E: 84,
+ 0xF8F: 84,
+ 0xF90: 84,
+ 0xF91: 84,
+ 0xF92: 84,
+ 0xF93: 84,
+ 0xF94: 84,
+ 0xF95: 84,
+ 0xF96: 84,
+ 0xF97: 84,
+ 0xF99: 84,
+ 0xF9A: 84,
+ 0xF9B: 84,
+ 0xF9C: 84,
+ 0xF9D: 84,
+ 0xF9E: 84,
+ 0xF9F: 84,
+ 0xFA0: 84,
+ 0xFA1: 84,
+ 0xFA2: 84,
+ 0xFA3: 84,
+ 0xFA4: 84,
+ 0xFA5: 84,
+ 0xFA6: 84,
+ 0xFA7: 84,
+ 0xFA8: 84,
+ 0xFA9: 84,
+ 0xFAA: 84,
+ 0xFAB: 84,
+ 0xFAC: 84,
+ 0xFAD: 84,
+ 0xFAE: 84,
+ 0xFAF: 84,
+ 0xFB0: 84,
+ 0xFB1: 84,
+ 0xFB2: 84,
+ 0xFB3: 84,
+ 0xFB4: 84,
+ 0xFB5: 84,
+ 0xFB6: 84,
+ 0xFB7: 84,
+ 0xFB8: 84,
+ 0xFB9: 84,
+ 0xFBA: 84,
+ 0xFBB: 84,
+ 0xFBC: 84,
+ 0xFC6: 84,
+ 0x102D: 84,
+ 0x102E: 84,
+ 0x102F: 84,
+ 0x1030: 84,
+ 0x1032: 84,
+ 0x1033: 84,
+ 0x1034: 84,
+ 0x1035: 84,
+ 0x1036: 84,
+ 0x1037: 84,
+ 0x1039: 84,
+ 0x103A: 84,
+ 0x103D: 84,
+ 0x103E: 84,
+ 0x1058: 84,
+ 0x1059: 84,
+ 0x105E: 84,
+ 0x105F: 84,
+ 0x1060: 84,
+ 0x1071: 84,
+ 0x1072: 84,
+ 0x1073: 84,
+ 0x1074: 84,
+ 0x1082: 84,
+ 0x1085: 84,
+ 0x1086: 84,
+ 0x108D: 84,
+ 0x109D: 84,
+ 0x135D: 84,
+ 0x135E: 84,
+ 0x135F: 84,
+ 0x1712: 84,
+ 0x1713: 84,
+ 0x1714: 84,
+ 0x1732: 84,
+ 0x1733: 84,
+ 0x1752: 84,
+ 0x1753: 84,
+ 0x1772: 84,
+ 0x1773: 84,
+ 0x17B4: 84,
+ 0x17B5: 84,
+ 0x17B7: 84,
+ 0x17B8: 84,
+ 0x17B9: 84,
+ 0x17BA: 84,
+ 0x17BB: 84,
+ 0x17BC: 84,
+ 0x17BD: 84,
+ 0x17C6: 84,
+ 0x17C9: 84,
+ 0x17CA: 84,
+ 0x17CB: 84,
+ 0x17CC: 84,
+ 0x17CD: 84,
+ 0x17CE: 84,
+ 0x17CF: 84,
+ 0x17D0: 84,
+ 0x17D1: 84,
+ 0x17D2: 84,
+ 0x17D3: 84,
+ 0x17DD: 84,
+ 0x1807: 68,
+ 0x180A: 67,
+ 0x180B: 84,
+ 0x180C: 84,
+ 0x180D: 84,
+ 0x180F: 84,
+ 0x1820: 68,
+ 0x1821: 68,
+ 0x1822: 68,
+ 0x1823: 68,
+ 0x1824: 68,
+ 0x1825: 68,
+ 0x1826: 68,
+ 0x1827: 68,
+ 0x1828: 68,
+ 0x1829: 68,
+ 0x182A: 68,
+ 0x182B: 68,
+ 0x182C: 68,
+ 0x182D: 68,
+ 0x182E: 68,
+ 0x182F: 68,
+ 0x1830: 68,
+ 0x1831: 68,
+ 0x1832: 68,
+ 0x1833: 68,
+ 0x1834: 68,
+ 0x1835: 68,
+ 0x1836: 68,
+ 0x1837: 68,
+ 0x1838: 68,
+ 0x1839: 68,
+ 0x183A: 68,
+ 0x183B: 68,
+ 0x183C: 68,
+ 0x183D: 68,
+ 0x183E: 68,
+ 0x183F: 68,
+ 0x1840: 68,
+ 0x1841: 68,
+ 0x1842: 68,
+ 0x1843: 68,
+ 0x1844: 68,
+ 0x1845: 68,
+ 0x1846: 68,
+ 0x1847: 68,
+ 0x1848: 68,
+ 0x1849: 68,
+ 0x184A: 68,
+ 0x184B: 68,
+ 0x184C: 68,
+ 0x184D: 68,
+ 0x184E: 68,
+ 0x184F: 68,
+ 0x1850: 68,
+ 0x1851: 68,
+ 0x1852: 68,
+ 0x1853: 68,
+ 0x1854: 68,
+ 0x1855: 68,
+ 0x1856: 68,
+ 0x1857: 68,
+ 0x1858: 68,
+ 0x1859: 68,
+ 0x185A: 68,
+ 0x185B: 68,
+ 0x185C: 68,
+ 0x185D: 68,
+ 0x185E: 68,
+ 0x185F: 68,
+ 0x1860: 68,
+ 0x1861: 68,
+ 0x1862: 68,
+ 0x1863: 68,
+ 0x1864: 68,
+ 0x1865: 68,
+ 0x1866: 68,
+ 0x1867: 68,
+ 0x1868: 68,
+ 0x1869: 68,
+ 0x186A: 68,
+ 0x186B: 68,
+ 0x186C: 68,
+ 0x186D: 68,
+ 0x186E: 68,
+ 0x186F: 68,
+ 0x1870: 68,
+ 0x1871: 68,
+ 0x1872: 68,
+ 0x1873: 68,
+ 0x1874: 68,
+ 0x1875: 68,
+ 0x1876: 68,
+ 0x1877: 68,
+ 0x1878: 68,
+ 0x1885: 84,
+ 0x1886: 84,
+ 0x1887: 68,
+ 0x1888: 68,
+ 0x1889: 68,
+ 0x188A: 68,
+ 0x188B: 68,
+ 0x188C: 68,
+ 0x188D: 68,
+ 0x188E: 68,
+ 0x188F: 68,
+ 0x1890: 68,
+ 0x1891: 68,
+ 0x1892: 68,
+ 0x1893: 68,
+ 0x1894: 68,
+ 0x1895: 68,
+ 0x1896: 68,
+ 0x1897: 68,
+ 0x1898: 68,
+ 0x1899: 68,
+ 0x189A: 68,
+ 0x189B: 68,
+ 0x189C: 68,
+ 0x189D: 68,
+ 0x189E: 68,
+ 0x189F: 68,
+ 0x18A0: 68,
+ 0x18A1: 68,
+ 0x18A2: 68,
+ 0x18A3: 68,
+ 0x18A4: 68,
+ 0x18A5: 68,
+ 0x18A6: 68,
+ 0x18A7: 68,
+ 0x18A8: 68,
+ 0x18A9: 84,
+ 0x18AA: 68,
+ 0x1920: 84,
+ 0x1921: 84,
+ 0x1922: 84,
+ 0x1927: 84,
+ 0x1928: 84,
+ 0x1932: 84,
+ 0x1939: 84,
+ 0x193A: 84,
+ 0x193B: 84,
+ 0x1A17: 84,
+ 0x1A18: 84,
+ 0x1A1B: 84,
+ 0x1A56: 84,
+ 0x1A58: 84,
+ 0x1A59: 84,
+ 0x1A5A: 84,
+ 0x1A5B: 84,
+ 0x1A5C: 84,
+ 0x1A5D: 84,
+ 0x1A5E: 84,
+ 0x1A60: 84,
+ 0x1A62: 84,
+ 0x1A65: 84,
+ 0x1A66: 84,
+ 0x1A67: 84,
+ 0x1A68: 84,
+ 0x1A69: 84,
+ 0x1A6A: 84,
+ 0x1A6B: 84,
+ 0x1A6C: 84,
+ 0x1A73: 84,
+ 0x1A74: 84,
+ 0x1A75: 84,
+ 0x1A76: 84,
+ 0x1A77: 84,
+ 0x1A78: 84,
+ 0x1A79: 84,
+ 0x1A7A: 84,
+ 0x1A7B: 84,
+ 0x1A7C: 84,
+ 0x1A7F: 84,
+ 0x1AB0: 84,
+ 0x1AB1: 84,
+ 0x1AB2: 84,
+ 0x1AB3: 84,
+ 0x1AB4: 84,
+ 0x1AB5: 84,
+ 0x1AB6: 84,
+ 0x1AB7: 84,
+ 0x1AB8: 84,
+ 0x1AB9: 84,
+ 0x1ABA: 84,
+ 0x1ABB: 84,
+ 0x1ABC: 84,
+ 0x1ABD: 84,
+ 0x1ABE: 84,
+ 0x1ABF: 84,
+ 0x1AC0: 84,
+ 0x1AC1: 84,
+ 0x1AC2: 84,
+ 0x1AC3: 84,
+ 0x1AC4: 84,
+ 0x1AC5: 84,
+ 0x1AC6: 84,
+ 0x1AC7: 84,
+ 0x1AC8: 84,
+ 0x1AC9: 84,
+ 0x1ACA: 84,
+ 0x1ACB: 84,
+ 0x1ACC: 84,
+ 0x1ACD: 84,
+ 0x1ACE: 84,
+ 0x1B00: 84,
+ 0x1B01: 84,
+ 0x1B02: 84,
+ 0x1B03: 84,
+ 0x1B34: 84,
+ 0x1B36: 84,
+ 0x1B37: 84,
+ 0x1B38: 84,
+ 0x1B39: 84,
+ 0x1B3A: 84,
+ 0x1B3C: 84,
+ 0x1B42: 84,
+ 0x1B6B: 84,
+ 0x1B6C: 84,
+ 0x1B6D: 84,
+ 0x1B6E: 84,
+ 0x1B6F: 84,
+ 0x1B70: 84,
+ 0x1B71: 84,
+ 0x1B72: 84,
+ 0x1B73: 84,
+ 0x1B80: 84,
+ 0x1B81: 84,
+ 0x1BA2: 84,
+ 0x1BA3: 84,
+ 0x1BA4: 84,
+ 0x1BA5: 84,
+ 0x1BA8: 84,
+ 0x1BA9: 84,
+ 0x1BAB: 84,
+ 0x1BAC: 84,
+ 0x1BAD: 84,
+ 0x1BE6: 84,
+ 0x1BE8: 84,
+ 0x1BE9: 84,
+ 0x1BED: 84,
+ 0x1BEF: 84,
+ 0x1BF0: 84,
+ 0x1BF1: 84,
+ 0x1C2C: 84,
+ 0x1C2D: 84,
+ 0x1C2E: 84,
+ 0x1C2F: 84,
+ 0x1C30: 84,
+ 0x1C31: 84,
+ 0x1C32: 84,
+ 0x1C33: 84,
+ 0x1C36: 84,
+ 0x1C37: 84,
+ 0x1CD0: 84,
+ 0x1CD1: 84,
+ 0x1CD2: 84,
+ 0x1CD4: 84,
+ 0x1CD5: 84,
+ 0x1CD6: 84,
+ 0x1CD7: 84,
+ 0x1CD8: 84,
+ 0x1CD9: 84,
+ 0x1CDA: 84,
+ 0x1CDB: 84,
+ 0x1CDC: 84,
+ 0x1CDD: 84,
+ 0x1CDE: 84,
+ 0x1CDF: 84,
+ 0x1CE0: 84,
+ 0x1CE2: 84,
+ 0x1CE3: 84,
+ 0x1CE4: 84,
+ 0x1CE5: 84,
+ 0x1CE6: 84,
+ 0x1CE7: 84,
+ 0x1CE8: 84,
+ 0x1CED: 84,
+ 0x1CF4: 84,
+ 0x1CF8: 84,
+ 0x1CF9: 84,
+ 0x1DC0: 84,
+ 0x1DC1: 84,
+ 0x1DC2: 84,
+ 0x1DC3: 84,
+ 0x1DC4: 84,
+ 0x1DC5: 84,
+ 0x1DC6: 84,
+ 0x1DC7: 84,
+ 0x1DC8: 84,
+ 0x1DC9: 84,
+ 0x1DCA: 84,
+ 0x1DCB: 84,
+ 0x1DCC: 84,
+ 0x1DCD: 84,
+ 0x1DCE: 84,
+ 0x1DCF: 84,
+ 0x1DD0: 84,
+ 0x1DD1: 84,
+ 0x1DD2: 84,
+ 0x1DD3: 84,
+ 0x1DD4: 84,
+ 0x1DD5: 84,
+ 0x1DD6: 84,
+ 0x1DD7: 84,
+ 0x1DD8: 84,
+ 0x1DD9: 84,
+ 0x1DDA: 84,
+ 0x1DDB: 84,
+ 0x1DDC: 84,
+ 0x1DDD: 84,
+ 0x1DDE: 84,
+ 0x1DDF: 84,
+ 0x1DE0: 84,
+ 0x1DE1: 84,
+ 0x1DE2: 84,
+ 0x1DE3: 84,
+ 0x1DE4: 84,
+ 0x1DE5: 84,
+ 0x1DE6: 84,
+ 0x1DE7: 84,
+ 0x1DE8: 84,
+ 0x1DE9: 84,
+ 0x1DEA: 84,
+ 0x1DEB: 84,
+ 0x1DEC: 84,
+ 0x1DED: 84,
+ 0x1DEE: 84,
+ 0x1DEF: 84,
+ 0x1DF0: 84,
+ 0x1DF1: 84,
+ 0x1DF2: 84,
+ 0x1DF3: 84,
+ 0x1DF4: 84,
+ 0x1DF5: 84,
+ 0x1DF6: 84,
+ 0x1DF7: 84,
+ 0x1DF8: 84,
+ 0x1DF9: 84,
+ 0x1DFA: 84,
+ 0x1DFB: 84,
+ 0x1DFC: 84,
+ 0x1DFD: 84,
+ 0x1DFE: 84,
+ 0x1DFF: 84,
+ 0x200B: 84,
+ 0x200D: 67,
+ 0x200E: 84,
+ 0x200F: 84,
+ 0x202A: 84,
+ 0x202B: 84,
+ 0x202C: 84,
+ 0x202D: 84,
+ 0x202E: 84,
+ 0x2060: 84,
+ 0x2061: 84,
+ 0x2062: 84,
+ 0x2063: 84,
+ 0x2064: 84,
+ 0x206A: 84,
+ 0x206B: 84,
+ 0x206C: 84,
+ 0x206D: 84,
+ 0x206E: 84,
+ 0x206F: 84,
+ 0x20D0: 84,
+ 0x20D1: 84,
+ 0x20D2: 84,
+ 0x20D3: 84,
+ 0x20D4: 84,
+ 0x20D5: 84,
+ 0x20D6: 84,
+ 0x20D7: 84,
+ 0x20D8: 84,
+ 0x20D9: 84,
+ 0x20DA: 84,
+ 0x20DB: 84,
+ 0x20DC: 84,
+ 0x20DD: 84,
+ 0x20DE: 84,
+ 0x20DF: 84,
+ 0x20E0: 84,
+ 0x20E1: 84,
+ 0x20E2: 84,
+ 0x20E3: 84,
+ 0x20E4: 84,
+ 0x20E5: 84,
+ 0x20E6: 84,
+ 0x20E7: 84,
+ 0x20E8: 84,
+ 0x20E9: 84,
+ 0x20EA: 84,
+ 0x20EB: 84,
+ 0x20EC: 84,
+ 0x20ED: 84,
+ 0x20EE: 84,
+ 0x20EF: 84,
+ 0x20F0: 84,
+ 0x2CEF: 84,
+ 0x2CF0: 84,
+ 0x2CF1: 84,
+ 0x2D7F: 84,
+ 0x2DE0: 84,
+ 0x2DE1: 84,
+ 0x2DE2: 84,
+ 0x2DE3: 84,
+ 0x2DE4: 84,
+ 0x2DE5: 84,
+ 0x2DE6: 84,
+ 0x2DE7: 84,
+ 0x2DE8: 84,
+ 0x2DE9: 84,
+ 0x2DEA: 84,
+ 0x2DEB: 84,
+ 0x2DEC: 84,
+ 0x2DED: 84,
+ 0x2DEE: 84,
+ 0x2DEF: 84,
+ 0x2DF0: 84,
+ 0x2DF1: 84,
+ 0x2DF2: 84,
+ 0x2DF3: 84,
+ 0x2DF4: 84,
+ 0x2DF5: 84,
+ 0x2DF6: 84,
+ 0x2DF7: 84,
+ 0x2DF8: 84,
+ 0x2DF9: 84,
+ 0x2DFA: 84,
+ 0x2DFB: 84,
+ 0x2DFC: 84,
+ 0x2DFD: 84,
+ 0x2DFE: 84,
+ 0x2DFF: 84,
+ 0x302A: 84,
+ 0x302B: 84,
+ 0x302C: 84,
+ 0x302D: 84,
+ 0x3099: 84,
+ 0x309A: 84,
+ 0xA66F: 84,
+ 0xA670: 84,
+ 0xA671: 84,
+ 0xA672: 84,
+ 0xA674: 84,
+ 0xA675: 84,
+ 0xA676: 84,
+ 0xA677: 84,
+ 0xA678: 84,
+ 0xA679: 84,
+ 0xA67A: 84,
+ 0xA67B: 84,
+ 0xA67C: 84,
+ 0xA67D: 84,
+ 0xA69E: 84,
+ 0xA69F: 84,
+ 0xA6F0: 84,
+ 0xA6F1: 84,
+ 0xA802: 84,
+ 0xA806: 84,
+ 0xA80B: 84,
+ 0xA825: 84,
+ 0xA826: 84,
+ 0xA82C: 84,
+ 0xA840: 68,
+ 0xA841: 68,
+ 0xA842: 68,
+ 0xA843: 68,
+ 0xA844: 68,
+ 0xA845: 68,
+ 0xA846: 68,
+ 0xA847: 68,
+ 0xA848: 68,
+ 0xA849: 68,
+ 0xA84A: 68,
+ 0xA84B: 68,
+ 0xA84C: 68,
+ 0xA84D: 68,
+ 0xA84E: 68,
+ 0xA84F: 68,
+ 0xA850: 68,
+ 0xA851: 68,
+ 0xA852: 68,
+ 0xA853: 68,
+ 0xA854: 68,
+ 0xA855: 68,
+ 0xA856: 68,
+ 0xA857: 68,
+ 0xA858: 68,
+ 0xA859: 68,
+ 0xA85A: 68,
+ 0xA85B: 68,
+ 0xA85C: 68,
+ 0xA85D: 68,
+ 0xA85E: 68,
+ 0xA85F: 68,
+ 0xA860: 68,
+ 0xA861: 68,
+ 0xA862: 68,
+ 0xA863: 68,
+ 0xA864: 68,
+ 0xA865: 68,
+ 0xA866: 68,
+ 0xA867: 68,
+ 0xA868: 68,
+ 0xA869: 68,
+ 0xA86A: 68,
+ 0xA86B: 68,
+ 0xA86C: 68,
+ 0xA86D: 68,
+ 0xA86E: 68,
+ 0xA86F: 68,
+ 0xA870: 68,
+ 0xA871: 68,
+ 0xA872: 76,
+ 0xA8C4: 84,
+ 0xA8C5: 84,
+ 0xA8E0: 84,
+ 0xA8E1: 84,
+ 0xA8E2: 84,
+ 0xA8E3: 84,
+ 0xA8E4: 84,
+ 0xA8E5: 84,
+ 0xA8E6: 84,
+ 0xA8E7: 84,
+ 0xA8E8: 84,
+ 0xA8E9: 84,
+ 0xA8EA: 84,
+ 0xA8EB: 84,
+ 0xA8EC: 84,
+ 0xA8ED: 84,
+ 0xA8EE: 84,
+ 0xA8EF: 84,
+ 0xA8F0: 84,
+ 0xA8F1: 84,
+ 0xA8FF: 84,
+ 0xA926: 84,
+ 0xA927: 84,
+ 0xA928: 84,
+ 0xA929: 84,
+ 0xA92A: 84,
+ 0xA92B: 84,
+ 0xA92C: 84,
+ 0xA92D: 84,
+ 0xA947: 84,
+ 0xA948: 84,
+ 0xA949: 84,
+ 0xA94A: 84,
+ 0xA94B: 84,
+ 0xA94C: 84,
+ 0xA94D: 84,
+ 0xA94E: 84,
+ 0xA94F: 84,
+ 0xA950: 84,
+ 0xA951: 84,
+ 0xA980: 84,
+ 0xA981: 84,
+ 0xA982: 84,
+ 0xA9B3: 84,
+ 0xA9B6: 84,
+ 0xA9B7: 84,
+ 0xA9B8: 84,
+ 0xA9B9: 84,
+ 0xA9BC: 84,
+ 0xA9BD: 84,
+ 0xA9E5: 84,
+ 0xAA29: 84,
+ 0xAA2A: 84,
+ 0xAA2B: 84,
+ 0xAA2C: 84,
+ 0xAA2D: 84,
+ 0xAA2E: 84,
+ 0xAA31: 84,
+ 0xAA32: 84,
+ 0xAA35: 84,
+ 0xAA36: 84,
+ 0xAA43: 84,
+ 0xAA4C: 84,
+ 0xAA7C: 84,
+ 0xAAB0: 84,
+ 0xAAB2: 84,
+ 0xAAB3: 84,
+ 0xAAB4: 84,
+ 0xAAB7: 84,
+ 0xAAB8: 84,
+ 0xAABE: 84,
+ 0xAABF: 84,
+ 0xAAC1: 84,
+ 0xAAEC: 84,
+ 0xAAED: 84,
+ 0xAAF6: 84,
+ 0xABE5: 84,
+ 0xABE8: 84,
+ 0xABED: 84,
+ 0xFB1E: 84,
+ 0xFE00: 84,
+ 0xFE01: 84,
+ 0xFE02: 84,
+ 0xFE03: 84,
+ 0xFE04: 84,
+ 0xFE05: 84,
+ 0xFE06: 84,
+ 0xFE07: 84,
+ 0xFE08: 84,
+ 0xFE09: 84,
+ 0xFE0A: 84,
+ 0xFE0B: 84,
+ 0xFE0C: 84,
+ 0xFE0D: 84,
+ 0xFE0E: 84,
+ 0xFE0F: 84,
+ 0xFE20: 84,
+ 0xFE21: 84,
+ 0xFE22: 84,
+ 0xFE23: 84,
+ 0xFE24: 84,
+ 0xFE25: 84,
+ 0xFE26: 84,
+ 0xFE27: 84,
+ 0xFE28: 84,
+ 0xFE29: 84,
+ 0xFE2A: 84,
+ 0xFE2B: 84,
+ 0xFE2C: 84,
+ 0xFE2D: 84,
+ 0xFE2E: 84,
+ 0xFE2F: 84,
+ 0xFEFF: 84,
+ 0xFFF9: 84,
+ 0xFFFA: 84,
+ 0xFFFB: 84,
+ 0x101FD: 84,
+ 0x102E0: 84,
+ 0x10376: 84,
+ 0x10377: 84,
+ 0x10378: 84,
+ 0x10379: 84,
+ 0x1037A: 84,
+ 0x10A01: 84,
+ 0x10A02: 84,
+ 0x10A03: 84,
+ 0x10A05: 84,
+ 0x10A06: 84,
+ 0x10A0C: 84,
+ 0x10A0D: 84,
+ 0x10A0E: 84,
+ 0x10A0F: 84,
+ 0x10A38: 84,
+ 0x10A39: 84,
+ 0x10A3A: 84,
+ 0x10A3F: 84,
+ 0x10AC0: 68,
+ 0x10AC1: 68,
+ 0x10AC2: 68,
+ 0x10AC3: 68,
+ 0x10AC4: 68,
+ 0x10AC5: 82,
+ 0x10AC7: 82,
+ 0x10AC9: 82,
+ 0x10ACA: 82,
+ 0x10ACD: 76,
+ 0x10ACE: 82,
+ 0x10ACF: 82,
+ 0x10AD0: 82,
+ 0x10AD1: 82,
+ 0x10AD2: 82,
+ 0x10AD3: 68,
+ 0x10AD4: 68,
+ 0x10AD5: 68,
+ 0x10AD6: 68,
+ 0x10AD7: 76,
+ 0x10AD8: 68,
+ 0x10AD9: 68,
+ 0x10ADA: 68,
+ 0x10ADB: 68,
+ 0x10ADC: 68,
+ 0x10ADD: 82,
+ 0x10ADE: 68,
+ 0x10ADF: 68,
+ 0x10AE0: 68,
+ 0x10AE1: 82,
+ 0x10AE4: 82,
+ 0x10AE5: 84,
+ 0x10AE6: 84,
+ 0x10AEB: 68,
+ 0x10AEC: 68,
+ 0x10AED: 68,
+ 0x10AEE: 68,
+ 0x10AEF: 82,
+ 0x10B80: 68,
+ 0x10B81: 82,
+ 0x10B82: 68,
+ 0x10B83: 82,
+ 0x10B84: 82,
+ 0x10B85: 82,
+ 0x10B86: 68,
+ 0x10B87: 68,
+ 0x10B88: 68,
+ 0x10B89: 82,
+ 0x10B8A: 68,
+ 0x10B8B: 68,
+ 0x10B8C: 82,
+ 0x10B8D: 68,
+ 0x10B8E: 82,
+ 0x10B8F: 82,
+ 0x10B90: 68,
+ 0x10B91: 82,
+ 0x10BA9: 82,
+ 0x10BAA: 82,
+ 0x10BAB: 82,
+ 0x10BAC: 82,
+ 0x10BAD: 68,
+ 0x10BAE: 68,
+ 0x10D00: 76,
+ 0x10D01: 68,
+ 0x10D02: 68,
+ 0x10D03: 68,
+ 0x10D04: 68,
+ 0x10D05: 68,
+ 0x10D06: 68,
+ 0x10D07: 68,
+ 0x10D08: 68,
+ 0x10D09: 68,
+ 0x10D0A: 68,
+ 0x10D0B: 68,
+ 0x10D0C: 68,
+ 0x10D0D: 68,
+ 0x10D0E: 68,
+ 0x10D0F: 68,
+ 0x10D10: 68,
+ 0x10D11: 68,
+ 0x10D12: 68,
+ 0x10D13: 68,
+ 0x10D14: 68,
+ 0x10D15: 68,
+ 0x10D16: 68,
+ 0x10D17: 68,
+ 0x10D18: 68,
+ 0x10D19: 68,
+ 0x10D1A: 68,
+ 0x10D1B: 68,
+ 0x10D1C: 68,
+ 0x10D1D: 68,
+ 0x10D1E: 68,
+ 0x10D1F: 68,
+ 0x10D20: 68,
+ 0x10D21: 68,
+ 0x10D22: 82,
+ 0x10D23: 68,
+ 0x10D24: 84,
+ 0x10D25: 84,
+ 0x10D26: 84,
+ 0x10D27: 84,
+ 0x10EAB: 84,
+ 0x10EAC: 84,
+ 0x10EFD: 84,
+ 0x10EFE: 84,
+ 0x10EFF: 84,
+ 0x10F30: 68,
+ 0x10F31: 68,
+ 0x10F32: 68,
+ 0x10F33: 82,
+ 0x10F34: 68,
+ 0x10F35: 68,
+ 0x10F36: 68,
+ 0x10F37: 68,
+ 0x10F38: 68,
+ 0x10F39: 68,
+ 0x10F3A: 68,
+ 0x10F3B: 68,
+ 0x10F3C: 68,
+ 0x10F3D: 68,
+ 0x10F3E: 68,
+ 0x10F3F: 68,
+ 0x10F40: 68,
+ 0x10F41: 68,
+ 0x10F42: 68,
+ 0x10F43: 68,
+ 0x10F44: 68,
+ 0x10F46: 84,
+ 0x10F47: 84,
+ 0x10F48: 84,
+ 0x10F49: 84,
+ 0x10F4A: 84,
+ 0x10F4B: 84,
+ 0x10F4C: 84,
+ 0x10F4D: 84,
+ 0x10F4E: 84,
+ 0x10F4F: 84,
+ 0x10F50: 84,
+ 0x10F51: 68,
+ 0x10F52: 68,
+ 0x10F53: 68,
+ 0x10F54: 82,
+ 0x10F70: 68,
+ 0x10F71: 68,
+ 0x10F72: 68,
+ 0x10F73: 68,
+ 0x10F74: 82,
+ 0x10F75: 82,
+ 0x10F76: 68,
+ 0x10F77: 68,
+ 0x10F78: 68,
+ 0x10F79: 68,
+ 0x10F7A: 68,
+ 0x10F7B: 68,
+ 0x10F7C: 68,
+ 0x10F7D: 68,
+ 0x10F7E: 68,
+ 0x10F7F: 68,
+ 0x10F80: 68,
+ 0x10F81: 68,
+ 0x10F82: 84,
+ 0x10F83: 84,
+ 0x10F84: 84,
+ 0x10F85: 84,
+ 0x10FB0: 68,
+ 0x10FB2: 68,
+ 0x10FB3: 68,
+ 0x10FB4: 82,
+ 0x10FB5: 82,
+ 0x10FB6: 82,
+ 0x10FB8: 68,
+ 0x10FB9: 82,
+ 0x10FBA: 82,
+ 0x10FBB: 68,
+ 0x10FBC: 68,
+ 0x10FBD: 82,
+ 0x10FBE: 68,
+ 0x10FBF: 68,
+ 0x10FC1: 68,
+ 0x10FC2: 82,
+ 0x10FC3: 82,
+ 0x10FC4: 68,
+ 0x10FC9: 82,
+ 0x10FCA: 68,
+ 0x10FCB: 76,
+ 0x11001: 84,
+ 0x11038: 84,
+ 0x11039: 84,
+ 0x1103A: 84,
+ 0x1103B: 84,
+ 0x1103C: 84,
+ 0x1103D: 84,
+ 0x1103E: 84,
+ 0x1103F: 84,
+ 0x11040: 84,
+ 0x11041: 84,
+ 0x11042: 84,
+ 0x11043: 84,
+ 0x11044: 84,
+ 0x11045: 84,
+ 0x11046: 84,
+ 0x11070: 84,
+ 0x11073: 84,
+ 0x11074: 84,
+ 0x1107F: 84,
+ 0x11080: 84,
+ 0x11081: 84,
+ 0x110B3: 84,
+ 0x110B4: 84,
+ 0x110B5: 84,
+ 0x110B6: 84,
+ 0x110B9: 84,
+ 0x110BA: 84,
+ 0x110C2: 84,
+ 0x11100: 84,
+ 0x11101: 84,
+ 0x11102: 84,
+ 0x11127: 84,
+ 0x11128: 84,
+ 0x11129: 84,
+ 0x1112A: 84,
+ 0x1112B: 84,
+ 0x1112D: 84,
+ 0x1112E: 84,
+ 0x1112F: 84,
+ 0x11130: 84,
+ 0x11131: 84,
+ 0x11132: 84,
+ 0x11133: 84,
+ 0x11134: 84,
+ 0x11173: 84,
+ 0x11180: 84,
+ 0x11181: 84,
+ 0x111B6: 84,
+ 0x111B7: 84,
+ 0x111B8: 84,
+ 0x111B9: 84,
+ 0x111BA: 84,
+ 0x111BB: 84,
+ 0x111BC: 84,
+ 0x111BD: 84,
+ 0x111BE: 84,
+ 0x111C9: 84,
+ 0x111CA: 84,
+ 0x111CB: 84,
+ 0x111CC: 84,
+ 0x111CF: 84,
+ 0x1122F: 84,
+ 0x11230: 84,
+ 0x11231: 84,
+ 0x11234: 84,
+ 0x11236: 84,
+ 0x11237: 84,
+ 0x1123E: 84,
+ 0x11241: 84,
+ 0x112DF: 84,
+ 0x112E3: 84,
+ 0x112E4: 84,
+ 0x112E5: 84,
+ 0x112E6: 84,
+ 0x112E7: 84,
+ 0x112E8: 84,
+ 0x112E9: 84,
+ 0x112EA: 84,
+ 0x11300: 84,
+ 0x11301: 84,
+ 0x1133B: 84,
+ 0x1133C: 84,
+ 0x11340: 84,
+ 0x11366: 84,
+ 0x11367: 84,
+ 0x11368: 84,
+ 0x11369: 84,
+ 0x1136A: 84,
+ 0x1136B: 84,
+ 0x1136C: 84,
+ 0x11370: 84,
+ 0x11371: 84,
+ 0x11372: 84,
+ 0x11373: 84,
+ 0x11374: 84,
+ 0x11438: 84,
+ 0x11439: 84,
+ 0x1143A: 84,
+ 0x1143B: 84,
+ 0x1143C: 84,
+ 0x1143D: 84,
+ 0x1143E: 84,
+ 0x1143F: 84,
+ 0x11442: 84,
+ 0x11443: 84,
+ 0x11444: 84,
+ 0x11446: 84,
+ 0x1145E: 84,
+ 0x114B3: 84,
+ 0x114B4: 84,
+ 0x114B5: 84,
+ 0x114B6: 84,
+ 0x114B7: 84,
+ 0x114B8: 84,
+ 0x114BA: 84,
+ 0x114BF: 84,
+ 0x114C0: 84,
+ 0x114C2: 84,
+ 0x114C3: 84,
+ 0x115B2: 84,
+ 0x115B3: 84,
+ 0x115B4: 84,
+ 0x115B5: 84,
+ 0x115BC: 84,
+ 0x115BD: 84,
+ 0x115BF: 84,
+ 0x115C0: 84,
+ 0x115DC: 84,
+ 0x115DD: 84,
+ 0x11633: 84,
+ 0x11634: 84,
+ 0x11635: 84,
+ 0x11636: 84,
+ 0x11637: 84,
+ 0x11638: 84,
+ 0x11639: 84,
+ 0x1163A: 84,
+ 0x1163D: 84,
+ 0x1163F: 84,
+ 0x11640: 84,
+ 0x116AB: 84,
+ 0x116AD: 84,
+ 0x116B0: 84,
+ 0x116B1: 84,
+ 0x116B2: 84,
+ 0x116B3: 84,
+ 0x116B4: 84,
+ 0x116B5: 84,
+ 0x116B7: 84,
+ 0x1171D: 84,
+ 0x1171E: 84,
+ 0x1171F: 84,
+ 0x11722: 84,
+ 0x11723: 84,
+ 0x11724: 84,
+ 0x11725: 84,
+ 0x11727: 84,
+ 0x11728: 84,
+ 0x11729: 84,
+ 0x1172A: 84,
+ 0x1172B: 84,
+ 0x1182F: 84,
+ 0x11830: 84,
+ 0x11831: 84,
+ 0x11832: 84,
+ 0x11833: 84,
+ 0x11834: 84,
+ 0x11835: 84,
+ 0x11836: 84,
+ 0x11837: 84,
+ 0x11839: 84,
+ 0x1183A: 84,
+ 0x1193B: 84,
+ 0x1193C: 84,
+ 0x1193E: 84,
+ 0x11943: 84,
+ 0x119D4: 84,
+ 0x119D5: 84,
+ 0x119D6: 84,
+ 0x119D7: 84,
+ 0x119DA: 84,
+ 0x119DB: 84,
+ 0x119E0: 84,
+ 0x11A01: 84,
+ 0x11A02: 84,
+ 0x11A03: 84,
+ 0x11A04: 84,
+ 0x11A05: 84,
+ 0x11A06: 84,
+ 0x11A07: 84,
+ 0x11A08: 84,
+ 0x11A09: 84,
+ 0x11A0A: 84,
+ 0x11A33: 84,
+ 0x11A34: 84,
+ 0x11A35: 84,
+ 0x11A36: 84,
+ 0x11A37: 84,
+ 0x11A38: 84,
+ 0x11A3B: 84,
+ 0x11A3C: 84,
+ 0x11A3D: 84,
+ 0x11A3E: 84,
+ 0x11A47: 84,
+ 0x11A51: 84,
+ 0x11A52: 84,
+ 0x11A53: 84,
+ 0x11A54: 84,
+ 0x11A55: 84,
+ 0x11A56: 84,
+ 0x11A59: 84,
+ 0x11A5A: 84,
+ 0x11A5B: 84,
+ 0x11A8A: 84,
+ 0x11A8B: 84,
+ 0x11A8C: 84,
+ 0x11A8D: 84,
+ 0x11A8E: 84,
+ 0x11A8F: 84,
+ 0x11A90: 84,
+ 0x11A91: 84,
+ 0x11A92: 84,
+ 0x11A93: 84,
+ 0x11A94: 84,
+ 0x11A95: 84,
+ 0x11A96: 84,
+ 0x11A98: 84,
+ 0x11A99: 84,
+ 0x11C30: 84,
+ 0x11C31: 84,
+ 0x11C32: 84,
+ 0x11C33: 84,
+ 0x11C34: 84,
+ 0x11C35: 84,
+ 0x11C36: 84,
+ 0x11C38: 84,
+ 0x11C39: 84,
+ 0x11C3A: 84,
+ 0x11C3B: 84,
+ 0x11C3C: 84,
+ 0x11C3D: 84,
+ 0x11C3F: 84,
+ 0x11C92: 84,
+ 0x11C93: 84,
+ 0x11C94: 84,
+ 0x11C95: 84,
+ 0x11C96: 84,
+ 0x11C97: 84,
+ 0x11C98: 84,
+ 0x11C99: 84,
+ 0x11C9A: 84,
+ 0x11C9B: 84,
+ 0x11C9C: 84,
+ 0x11C9D: 84,
+ 0x11C9E: 84,
+ 0x11C9F: 84,
+ 0x11CA0: 84,
+ 0x11CA1: 84,
+ 0x11CA2: 84,
+ 0x11CA3: 84,
+ 0x11CA4: 84,
+ 0x11CA5: 84,
+ 0x11CA6: 84,
+ 0x11CA7: 84,
+ 0x11CAA: 84,
+ 0x11CAB: 84,
+ 0x11CAC: 84,
+ 0x11CAD: 84,
+ 0x11CAE: 84,
+ 0x11CAF: 84,
+ 0x11CB0: 84,
+ 0x11CB2: 84,
+ 0x11CB3: 84,
+ 0x11CB5: 84,
+ 0x11CB6: 84,
+ 0x11D31: 84,
+ 0x11D32: 84,
+ 0x11D33: 84,
+ 0x11D34: 84,
+ 0x11D35: 84,
+ 0x11D36: 84,
+ 0x11D3A: 84,
+ 0x11D3C: 84,
+ 0x11D3D: 84,
+ 0x11D3F: 84,
+ 0x11D40: 84,
+ 0x11D41: 84,
+ 0x11D42: 84,
+ 0x11D43: 84,
+ 0x11D44: 84,
+ 0x11D45: 84,
+ 0x11D47: 84,
+ 0x11D90: 84,
+ 0x11D91: 84,
+ 0x11D95: 84,
+ 0x11D97: 84,
+ 0x11EF3: 84,
+ 0x11EF4: 84,
+ 0x11F00: 84,
+ 0x11F01: 84,
+ 0x11F36: 84,
+ 0x11F37: 84,
+ 0x11F38: 84,
+ 0x11F39: 84,
+ 0x11F3A: 84,
+ 0x11F40: 84,
+ 0x11F42: 84,
+ 0x13430: 84,
+ 0x13431: 84,
+ 0x13432: 84,
+ 0x13433: 84,
+ 0x13434: 84,
+ 0x13435: 84,
+ 0x13436: 84,
+ 0x13437: 84,
+ 0x13438: 84,
+ 0x13439: 84,
+ 0x1343A: 84,
+ 0x1343B: 84,
+ 0x1343C: 84,
+ 0x1343D: 84,
+ 0x1343E: 84,
+ 0x1343F: 84,
+ 0x13440: 84,
+ 0x13447: 84,
+ 0x13448: 84,
+ 0x13449: 84,
+ 0x1344A: 84,
+ 0x1344B: 84,
+ 0x1344C: 84,
+ 0x1344D: 84,
+ 0x1344E: 84,
+ 0x1344F: 84,
+ 0x13450: 84,
+ 0x13451: 84,
+ 0x13452: 84,
+ 0x13453: 84,
+ 0x13454: 84,
+ 0x13455: 84,
+ 0x16AF0: 84,
+ 0x16AF1: 84,
+ 0x16AF2: 84,
+ 0x16AF3: 84,
+ 0x16AF4: 84,
+ 0x16B30: 84,
+ 0x16B31: 84,
+ 0x16B32: 84,
+ 0x16B33: 84,
+ 0x16B34: 84,
+ 0x16B35: 84,
+ 0x16B36: 84,
+ 0x16F4F: 84,
+ 0x16F8F: 84,
+ 0x16F90: 84,
+ 0x16F91: 84,
+ 0x16F92: 84,
+ 0x16FE4: 84,
+ 0x1BC9D: 84,
+ 0x1BC9E: 84,
+ 0x1BCA0: 84,
+ 0x1BCA1: 84,
+ 0x1BCA2: 84,
+ 0x1BCA3: 84,
+ 0x1CF00: 84,
+ 0x1CF01: 84,
+ 0x1CF02: 84,
+ 0x1CF03: 84,
+ 0x1CF04: 84,
+ 0x1CF05: 84,
+ 0x1CF06: 84,
+ 0x1CF07: 84,
+ 0x1CF08: 84,
+ 0x1CF09: 84,
+ 0x1CF0A: 84,
+ 0x1CF0B: 84,
+ 0x1CF0C: 84,
+ 0x1CF0D: 84,
+ 0x1CF0E: 84,
+ 0x1CF0F: 84,
+ 0x1CF10: 84,
+ 0x1CF11: 84,
+ 0x1CF12: 84,
+ 0x1CF13: 84,
+ 0x1CF14: 84,
+ 0x1CF15: 84,
+ 0x1CF16: 84,
+ 0x1CF17: 84,
+ 0x1CF18: 84,
+ 0x1CF19: 84,
+ 0x1CF1A: 84,
+ 0x1CF1B: 84,
+ 0x1CF1C: 84,
+ 0x1CF1D: 84,
+ 0x1CF1E: 84,
+ 0x1CF1F: 84,
+ 0x1CF20: 84,
+ 0x1CF21: 84,
+ 0x1CF22: 84,
+ 0x1CF23: 84,
+ 0x1CF24: 84,
+ 0x1CF25: 84,
+ 0x1CF26: 84,
+ 0x1CF27: 84,
+ 0x1CF28: 84,
+ 0x1CF29: 84,
+ 0x1CF2A: 84,
+ 0x1CF2B: 84,
+ 0x1CF2C: 84,
+ 0x1CF2D: 84,
+ 0x1CF30: 84,
+ 0x1CF31: 84,
+ 0x1CF32: 84,
+ 0x1CF33: 84,
+ 0x1CF34: 84,
+ 0x1CF35: 84,
+ 0x1CF36: 84,
+ 0x1CF37: 84,
+ 0x1CF38: 84,
+ 0x1CF39: 84,
+ 0x1CF3A: 84,
+ 0x1CF3B: 84,
+ 0x1CF3C: 84,
+ 0x1CF3D: 84,
+ 0x1CF3E: 84,
+ 0x1CF3F: 84,
+ 0x1CF40: 84,
+ 0x1CF41: 84,
+ 0x1CF42: 84,
+ 0x1CF43: 84,
+ 0x1CF44: 84,
+ 0x1CF45: 84,
+ 0x1CF46: 84,
+ 0x1D167: 84,
+ 0x1D168: 84,
+ 0x1D169: 84,
+ 0x1D173: 84,
+ 0x1D174: 84,
+ 0x1D175: 84,
+ 0x1D176: 84,
+ 0x1D177: 84,
+ 0x1D178: 84,
+ 0x1D179: 84,
+ 0x1D17A: 84,
+ 0x1D17B: 84,
+ 0x1D17C: 84,
+ 0x1D17D: 84,
+ 0x1D17E: 84,
+ 0x1D17F: 84,
+ 0x1D180: 84,
+ 0x1D181: 84,
+ 0x1D182: 84,
+ 0x1D185: 84,
+ 0x1D186: 84,
+ 0x1D187: 84,
+ 0x1D188: 84,
+ 0x1D189: 84,
+ 0x1D18A: 84,
+ 0x1D18B: 84,
+ 0x1D1AA: 84,
+ 0x1D1AB: 84,
+ 0x1D1AC: 84,
+ 0x1D1AD: 84,
+ 0x1D242: 84,
+ 0x1D243: 84,
+ 0x1D244: 84,
+ 0x1DA00: 84,
+ 0x1DA01: 84,
+ 0x1DA02: 84,
+ 0x1DA03: 84,
+ 0x1DA04: 84,
+ 0x1DA05: 84,
+ 0x1DA06: 84,
+ 0x1DA07: 84,
+ 0x1DA08: 84,
+ 0x1DA09: 84,
+ 0x1DA0A: 84,
+ 0x1DA0B: 84,
+ 0x1DA0C: 84,
+ 0x1DA0D: 84,
+ 0x1DA0E: 84,
+ 0x1DA0F: 84,
+ 0x1DA10: 84,
+ 0x1DA11: 84,
+ 0x1DA12: 84,
+ 0x1DA13: 84,
+ 0x1DA14: 84,
+ 0x1DA15: 84,
+ 0x1DA16: 84,
+ 0x1DA17: 84,
+ 0x1DA18: 84,
+ 0x1DA19: 84,
+ 0x1DA1A: 84,
+ 0x1DA1B: 84,
+ 0x1DA1C: 84,
+ 0x1DA1D: 84,
+ 0x1DA1E: 84,
+ 0x1DA1F: 84,
+ 0x1DA20: 84,
+ 0x1DA21: 84,
+ 0x1DA22: 84,
+ 0x1DA23: 84,
+ 0x1DA24: 84,
+ 0x1DA25: 84,
+ 0x1DA26: 84,
+ 0x1DA27: 84,
+ 0x1DA28: 84,
+ 0x1DA29: 84,
+ 0x1DA2A: 84,
+ 0x1DA2B: 84,
+ 0x1DA2C: 84,
+ 0x1DA2D: 84,
+ 0x1DA2E: 84,
+ 0x1DA2F: 84,
+ 0x1DA30: 84,
+ 0x1DA31: 84,
+ 0x1DA32: 84,
+ 0x1DA33: 84,
+ 0x1DA34: 84,
+ 0x1DA35: 84,
+ 0x1DA36: 84,
+ 0x1DA3B: 84,
+ 0x1DA3C: 84,
+ 0x1DA3D: 84,
+ 0x1DA3E: 84,
+ 0x1DA3F: 84,
+ 0x1DA40: 84,
+ 0x1DA41: 84,
+ 0x1DA42: 84,
+ 0x1DA43: 84,
+ 0x1DA44: 84,
+ 0x1DA45: 84,
+ 0x1DA46: 84,
+ 0x1DA47: 84,
+ 0x1DA48: 84,
+ 0x1DA49: 84,
+ 0x1DA4A: 84,
+ 0x1DA4B: 84,
+ 0x1DA4C: 84,
+ 0x1DA4D: 84,
+ 0x1DA4E: 84,
+ 0x1DA4F: 84,
+ 0x1DA50: 84,
+ 0x1DA51: 84,
+ 0x1DA52: 84,
+ 0x1DA53: 84,
+ 0x1DA54: 84,
+ 0x1DA55: 84,
+ 0x1DA56: 84,
+ 0x1DA57: 84,
+ 0x1DA58: 84,
+ 0x1DA59: 84,
+ 0x1DA5A: 84,
+ 0x1DA5B: 84,
+ 0x1DA5C: 84,
+ 0x1DA5D: 84,
+ 0x1DA5E: 84,
+ 0x1DA5F: 84,
+ 0x1DA60: 84,
+ 0x1DA61: 84,
+ 0x1DA62: 84,
+ 0x1DA63: 84,
+ 0x1DA64: 84,
+ 0x1DA65: 84,
+ 0x1DA66: 84,
+ 0x1DA67: 84,
+ 0x1DA68: 84,
+ 0x1DA69: 84,
+ 0x1DA6A: 84,
+ 0x1DA6B: 84,
+ 0x1DA6C: 84,
+ 0x1DA75: 84,
+ 0x1DA84: 84,
+ 0x1DA9B: 84,
+ 0x1DA9C: 84,
+ 0x1DA9D: 84,
+ 0x1DA9E: 84,
+ 0x1DA9F: 84,
+ 0x1DAA1: 84,
+ 0x1DAA2: 84,
+ 0x1DAA3: 84,
+ 0x1DAA4: 84,
+ 0x1DAA5: 84,
+ 0x1DAA6: 84,
+ 0x1DAA7: 84,
+ 0x1DAA8: 84,
+ 0x1DAA9: 84,
+ 0x1DAAA: 84,
+ 0x1DAAB: 84,
+ 0x1DAAC: 84,
+ 0x1DAAD: 84,
+ 0x1DAAE: 84,
+ 0x1DAAF: 84,
+ 0x1E000: 84,
+ 0x1E001: 84,
+ 0x1E002: 84,
+ 0x1E003: 84,
+ 0x1E004: 84,
+ 0x1E005: 84,
+ 0x1E006: 84,
+ 0x1E008: 84,
+ 0x1E009: 84,
+ 0x1E00A: 84,
+ 0x1E00B: 84,
+ 0x1E00C: 84,
+ 0x1E00D: 84,
+ 0x1E00E: 84,
+ 0x1E00F: 84,
+ 0x1E010: 84,
+ 0x1E011: 84,
+ 0x1E012: 84,
+ 0x1E013: 84,
+ 0x1E014: 84,
+ 0x1E015: 84,
+ 0x1E016: 84,
+ 0x1E017: 84,
+ 0x1E018: 84,
+ 0x1E01B: 84,
+ 0x1E01C: 84,
+ 0x1E01D: 84,
+ 0x1E01E: 84,
+ 0x1E01F: 84,
+ 0x1E020: 84,
+ 0x1E021: 84,
+ 0x1E023: 84,
+ 0x1E024: 84,
+ 0x1E026: 84,
+ 0x1E027: 84,
+ 0x1E028: 84,
+ 0x1E029: 84,
+ 0x1E02A: 84,
+ 0x1E08F: 84,
+ 0x1E130: 84,
+ 0x1E131: 84,
+ 0x1E132: 84,
+ 0x1E133: 84,
+ 0x1E134: 84,
+ 0x1E135: 84,
+ 0x1E136: 84,
+ 0x1E2AE: 84,
+ 0x1E2EC: 84,
+ 0x1E2ED: 84,
+ 0x1E2EE: 84,
+ 0x1E2EF: 84,
+ 0x1E4EC: 84,
+ 0x1E4ED: 84,
+ 0x1E4EE: 84,
+ 0x1E4EF: 84,
+ 0x1E8D0: 84,
+ 0x1E8D1: 84,
+ 0x1E8D2: 84,
+ 0x1E8D3: 84,
+ 0x1E8D4: 84,
+ 0x1E8D5: 84,
+ 0x1E8D6: 84,
+ 0x1E900: 68,
+ 0x1E901: 68,
+ 0x1E902: 68,
+ 0x1E903: 68,
+ 0x1E904: 68,
+ 0x1E905: 68,
+ 0x1E906: 68,
+ 0x1E907: 68,
+ 0x1E908: 68,
+ 0x1E909: 68,
+ 0x1E90A: 68,
+ 0x1E90B: 68,
+ 0x1E90C: 68,
+ 0x1E90D: 68,
+ 0x1E90E: 68,
+ 0x1E90F: 68,
+ 0x1E910: 68,
+ 0x1E911: 68,
+ 0x1E912: 68,
+ 0x1E913: 68,
+ 0x1E914: 68,
+ 0x1E915: 68,
+ 0x1E916: 68,
+ 0x1E917: 68,
+ 0x1E918: 68,
+ 0x1E919: 68,
+ 0x1E91A: 68,
+ 0x1E91B: 68,
+ 0x1E91C: 68,
+ 0x1E91D: 68,
+ 0x1E91E: 68,
+ 0x1E91F: 68,
+ 0x1E920: 68,
+ 0x1E921: 68,
+ 0x1E922: 68,
+ 0x1E923: 68,
+ 0x1E924: 68,
+ 0x1E925: 68,
+ 0x1E926: 68,
+ 0x1E927: 68,
+ 0x1E928: 68,
+ 0x1E929: 68,
+ 0x1E92A: 68,
+ 0x1E92B: 68,
+ 0x1E92C: 68,
+ 0x1E92D: 68,
+ 0x1E92E: 68,
+ 0x1E92F: 68,
+ 0x1E930: 68,
+ 0x1E931: 68,
+ 0x1E932: 68,
+ 0x1E933: 68,
+ 0x1E934: 68,
+ 0x1E935: 68,
+ 0x1E936: 68,
+ 0x1E937: 68,
+ 0x1E938: 68,
+ 0x1E939: 68,
+ 0x1E93A: 68,
+ 0x1E93B: 68,
+ 0x1E93C: 68,
+ 0x1E93D: 68,
+ 0x1E93E: 68,
+ 0x1E93F: 68,
+ 0x1E940: 68,
+ 0x1E941: 68,
+ 0x1E942: 68,
+ 0x1E943: 68,
+ 0x1E944: 84,
+ 0x1E945: 84,
+ 0x1E946: 84,
+ 0x1E947: 84,
+ 0x1E948: 84,
+ 0x1E949: 84,
+ 0x1E94A: 84,
+ 0x1E94B: 84,
+ 0xE0001: 84,
+ 0xE0020: 84,
+ 0xE0021: 84,
+ 0xE0022: 84,
+ 0xE0023: 84,
+ 0xE0024: 84,
+ 0xE0025: 84,
+ 0xE0026: 84,
+ 0xE0027: 84,
+ 0xE0028: 84,
+ 0xE0029: 84,
+ 0xE002A: 84,
+ 0xE002B: 84,
+ 0xE002C: 84,
+ 0xE002D: 84,
+ 0xE002E: 84,
+ 0xE002F: 84,
+ 0xE0030: 84,
+ 0xE0031: 84,
+ 0xE0032: 84,
+ 0xE0033: 84,
+ 0xE0034: 84,
+ 0xE0035: 84,
+ 0xE0036: 84,
+ 0xE0037: 84,
+ 0xE0038: 84,
+ 0xE0039: 84,
+ 0xE003A: 84,
+ 0xE003B: 84,
+ 0xE003C: 84,
+ 0xE003D: 84,
+ 0xE003E: 84,
+ 0xE003F: 84,
+ 0xE0040: 84,
+ 0xE0041: 84,
+ 0xE0042: 84,
+ 0xE0043: 84,
+ 0xE0044: 84,
+ 0xE0045: 84,
+ 0xE0046: 84,
+ 0xE0047: 84,
+ 0xE0048: 84,
+ 0xE0049: 84,
+ 0xE004A: 84,
+ 0xE004B: 84,
+ 0xE004C: 84,
+ 0xE004D: 84,
+ 0xE004E: 84,
+ 0xE004F: 84,
+ 0xE0050: 84,
+ 0xE0051: 84,
+ 0xE0052: 84,
+ 0xE0053: 84,
+ 0xE0054: 84,
+ 0xE0055: 84,
+ 0xE0056: 84,
+ 0xE0057: 84,
+ 0xE0058: 84,
+ 0xE0059: 84,
+ 0xE005A: 84,
+ 0xE005B: 84,
+ 0xE005C: 84,
+ 0xE005D: 84,
+ 0xE005E: 84,
+ 0xE005F: 84,
+ 0xE0060: 84,
+ 0xE0061: 84,
+ 0xE0062: 84,
+ 0xE0063: 84,
+ 0xE0064: 84,
+ 0xE0065: 84,
+ 0xE0066: 84,
+ 0xE0067: 84,
+ 0xE0068: 84,
+ 0xE0069: 84,
+ 0xE006A: 84,
+ 0xE006B: 84,
+ 0xE006C: 84,
+ 0xE006D: 84,
+ 0xE006E: 84,
+ 0xE006F: 84,
+ 0xE0070: 84,
+ 0xE0071: 84,
+ 0xE0072: 84,
+ 0xE0073: 84,
+ 0xE0074: 84,
+ 0xE0075: 84,
+ 0xE0076: 84,
+ 0xE0077: 84,
+ 0xE0078: 84,
+ 0xE0079: 84,
+ 0xE007A: 84,
+ 0xE007B: 84,
+ 0xE007C: 84,
+ 0xE007D: 84,
+ 0xE007E: 84,
+ 0xE007F: 84,
+ 0xE0100: 84,
+ 0xE0101: 84,
+ 0xE0102: 84,
+ 0xE0103: 84,
+ 0xE0104: 84,
+ 0xE0105: 84,
+ 0xE0106: 84,
+ 0xE0107: 84,
+ 0xE0108: 84,
+ 0xE0109: 84,
+ 0xE010A: 84,
+ 0xE010B: 84,
+ 0xE010C: 84,
+ 0xE010D: 84,
+ 0xE010E: 84,
+ 0xE010F: 84,
+ 0xE0110: 84,
+ 0xE0111: 84,
+ 0xE0112: 84,
+ 0xE0113: 84,
+ 0xE0114: 84,
+ 0xE0115: 84,
+ 0xE0116: 84,
+ 0xE0117: 84,
+ 0xE0118: 84,
+ 0xE0119: 84,
+ 0xE011A: 84,
+ 0xE011B: 84,
+ 0xE011C: 84,
+ 0xE011D: 84,
+ 0xE011E: 84,
+ 0xE011F: 84,
+ 0xE0120: 84,
+ 0xE0121: 84,
+ 0xE0122: 84,
+ 0xE0123: 84,
+ 0xE0124: 84,
+ 0xE0125: 84,
+ 0xE0126: 84,
+ 0xE0127: 84,
+ 0xE0128: 84,
+ 0xE0129: 84,
+ 0xE012A: 84,
+ 0xE012B: 84,
+ 0xE012C: 84,
+ 0xE012D: 84,
+ 0xE012E: 84,
+ 0xE012F: 84,
+ 0xE0130: 84,
+ 0xE0131: 84,
+ 0xE0132: 84,
+ 0xE0133: 84,
+ 0xE0134: 84,
+ 0xE0135: 84,
+ 0xE0136: 84,
+ 0xE0137: 84,
+ 0xE0138: 84,
+ 0xE0139: 84,
+ 0xE013A: 84,
+ 0xE013B: 84,
+ 0xE013C: 84,
+ 0xE013D: 84,
+ 0xE013E: 84,
+ 0xE013F: 84,
+ 0xE0140: 84,
+ 0xE0141: 84,
+ 0xE0142: 84,
+ 0xE0143: 84,
+ 0xE0144: 84,
+ 0xE0145: 84,
+ 0xE0146: 84,
+ 0xE0147: 84,
+ 0xE0148: 84,
+ 0xE0149: 84,
+ 0xE014A: 84,
+ 0xE014B: 84,
+ 0xE014C: 84,
+ 0xE014D: 84,
+ 0xE014E: 84,
+ 0xE014F: 84,
+ 0xE0150: 84,
+ 0xE0151: 84,
+ 0xE0152: 84,
+ 0xE0153: 84,
+ 0xE0154: 84,
+ 0xE0155: 84,
+ 0xE0156: 84,
+ 0xE0157: 84,
+ 0xE0158: 84,
+ 0xE0159: 84,
+ 0xE015A: 84,
+ 0xE015B: 84,
+ 0xE015C: 84,
+ 0xE015D: 84,
+ 0xE015E: 84,
+ 0xE015F: 84,
+ 0xE0160: 84,
+ 0xE0161: 84,
+ 0xE0162: 84,
+ 0xE0163: 84,
+ 0xE0164: 84,
+ 0xE0165: 84,
+ 0xE0166: 84,
+ 0xE0167: 84,
+ 0xE0168: 84,
+ 0xE0169: 84,
+ 0xE016A: 84,
+ 0xE016B: 84,
+ 0xE016C: 84,
+ 0xE016D: 84,
+ 0xE016E: 84,
+ 0xE016F: 84,
+ 0xE0170: 84,
+ 0xE0171: 84,
+ 0xE0172: 84,
+ 0xE0173: 84,
+ 0xE0174: 84,
+ 0xE0175: 84,
+ 0xE0176: 84,
+ 0xE0177: 84,
+ 0xE0178: 84,
+ 0xE0179: 84,
+ 0xE017A: 84,
+ 0xE017B: 84,
+ 0xE017C: 84,
+ 0xE017D: 84,
+ 0xE017E: 84,
+ 0xE017F: 84,
+ 0xE0180: 84,
+ 0xE0181: 84,
+ 0xE0182: 84,
+ 0xE0183: 84,
+ 0xE0184: 84,
+ 0xE0185: 84,
+ 0xE0186: 84,
+ 0xE0187: 84,
+ 0xE0188: 84,
+ 0xE0189: 84,
+ 0xE018A: 84,
+ 0xE018B: 84,
+ 0xE018C: 84,
+ 0xE018D: 84,
+ 0xE018E: 84,
+ 0xE018F: 84,
+ 0xE0190: 84,
+ 0xE0191: 84,
+ 0xE0192: 84,
+ 0xE0193: 84,
+ 0xE0194: 84,
+ 0xE0195: 84,
+ 0xE0196: 84,
+ 0xE0197: 84,
+ 0xE0198: 84,
+ 0xE0199: 84,
+ 0xE019A: 84,
+ 0xE019B: 84,
+ 0xE019C: 84,
+ 0xE019D: 84,
+ 0xE019E: 84,
+ 0xE019F: 84,
+ 0xE01A0: 84,
+ 0xE01A1: 84,
+ 0xE01A2: 84,
+ 0xE01A3: 84,
+ 0xE01A4: 84,
+ 0xE01A5: 84,
+ 0xE01A6: 84,
+ 0xE01A7: 84,
+ 0xE01A8: 84,
+ 0xE01A9: 84,
+ 0xE01AA: 84,
+ 0xE01AB: 84,
+ 0xE01AC: 84,
+ 0xE01AD: 84,
+ 0xE01AE: 84,
+ 0xE01AF: 84,
+ 0xE01B0: 84,
+ 0xE01B1: 84,
+ 0xE01B2: 84,
+ 0xE01B3: 84,
+ 0xE01B4: 84,
+ 0xE01B5: 84,
+ 0xE01B6: 84,
+ 0xE01B7: 84,
+ 0xE01B8: 84,
+ 0xE01B9: 84,
+ 0xE01BA: 84,
+ 0xE01BB: 84,
+ 0xE01BC: 84,
+ 0xE01BD: 84,
+ 0xE01BE: 84,
+ 0xE01BF: 84,
+ 0xE01C0: 84,
+ 0xE01C1: 84,
+ 0xE01C2: 84,
+ 0xE01C3: 84,
+ 0xE01C4: 84,
+ 0xE01C5: 84,
+ 0xE01C6: 84,
+ 0xE01C7: 84,
+ 0xE01C8: 84,
+ 0xE01C9: 84,
+ 0xE01CA: 84,
+ 0xE01CB: 84,
+ 0xE01CC: 84,
+ 0xE01CD: 84,
+ 0xE01CE: 84,
+ 0xE01CF: 84,
+ 0xE01D0: 84,
+ 0xE01D1: 84,
+ 0xE01D2: 84,
+ 0xE01D3: 84,
+ 0xE01D4: 84,
+ 0xE01D5: 84,
+ 0xE01D6: 84,
+ 0xE01D7: 84,
+ 0xE01D8: 84,
+ 0xE01D9: 84,
+ 0xE01DA: 84,
+ 0xE01DB: 84,
+ 0xE01DC: 84,
+ 0xE01DD: 84,
+ 0xE01DE: 84,
+ 0xE01DF: 84,
+ 0xE01E0: 84,
+ 0xE01E1: 84,
+ 0xE01E2: 84,
+ 0xE01E3: 84,
+ 0xE01E4: 84,
+ 0xE01E5: 84,
+ 0xE01E6: 84,
+ 0xE01E7: 84,
+ 0xE01E8: 84,
+ 0xE01E9: 84,
+ 0xE01EA: 84,
+ 0xE01EB: 84,
+ 0xE01EC: 84,
+ 0xE01ED: 84,
+ 0xE01EE: 84,
+ 0xE01EF: 84,
+}
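
The integer values in the table that closes here are, to the best of my reading, the `ord()` codes of Unicode joining-type letters: 68 is `'D'` (dual-joining), 82 `'R'` (right-joining), 76 `'L'` (left-joining), 84 `'T'` (transparent), and 67 `'C'` (join-causing). A minimal sketch of reading entries back under that assumption (sample values are copied from the table; the variable name is illustrative only):

```python
# Sketch: the per-codepoint values above appear to store ord() of the
# joining-type letter (D/R/L/T/C). Sample entries copied from the table.
sample_joining_values = {
    0x200D: 67,   # ZERO WIDTH JOINER        -> 'C' (join-causing)
    0xA840: 68,   # Phags-pa range           -> 'D' (dual-joining)
    0x10AC5: 82,  # Manichaean range         -> 'R' (right-joining)
    0x10ACD: 76,  # Manichaean range         -> 'L' (left-joining)
    0x1E944: 84,  # Adlam combining mark     -> 'T' (transparent)
}

for codepoint, value in sample_joining_values.items():
    print(f"U+{codepoint:04X}: joining type {chr(value)}")
```
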
+codepoint_classes = {
+ "PVALID": (
+ 0x2D0000002E,
+ 0x300000003A,
+ 0x610000007B,
+ 0xDF000000F7,
+ 0xF800000100,
+ 0x10100000102,
+ 0x10300000104,
+ 0x10500000106,
+ 0x10700000108,
+ 0x1090000010A,
+ 0x10B0000010C,
+ 0x10D0000010E,
+ 0x10F00000110,
+ 0x11100000112,
+ 0x11300000114,
+ 0x11500000116,
+ 0x11700000118,
+ 0x1190000011A,
+ 0x11B0000011C,
+ 0x11D0000011E,
+ 0x11F00000120,
+ 0x12100000122,
+ 0x12300000124,
+ 0x12500000126,
+ 0x12700000128,
+ 0x1290000012A,
+ 0x12B0000012C,
+ 0x12D0000012E,
+ 0x12F00000130,
+ 0x13100000132,
+ 0x13500000136,
+ 0x13700000139,
+ 0x13A0000013B,
+ 0x13C0000013D,
+ 0x13E0000013F,
+ 0x14200000143,
+ 0x14400000145,
+ 0x14600000147,
+ 0x14800000149,
+ 0x14B0000014C,
+ 0x14D0000014E,
+ 0x14F00000150,
+ 0x15100000152,
+ 0x15300000154,
+ 0x15500000156,
+ 0x15700000158,
+ 0x1590000015A,
+ 0x15B0000015C,
+ 0x15D0000015E,
+ 0x15F00000160,
+ 0x16100000162,
+ 0x16300000164,
+ 0x16500000166,
+ 0x16700000168,
+ 0x1690000016A,
+ 0x16B0000016C,
+ 0x16D0000016E,
+ 0x16F00000170,
+ 0x17100000172,
+ 0x17300000174,
+ 0x17500000176,
+ 0x17700000178,
+ 0x17A0000017B,
+ 0x17C0000017D,
+ 0x17E0000017F,
+ 0x18000000181,
+ 0x18300000184,
+ 0x18500000186,
+ 0x18800000189,
+ 0x18C0000018E,
+ 0x19200000193,
+ 0x19500000196,
+ 0x1990000019C,
+ 0x19E0000019F,
+ 0x1A1000001A2,
+ 0x1A3000001A4,
+ 0x1A5000001A6,
+ 0x1A8000001A9,
+ 0x1AA000001AC,
+ 0x1AD000001AE,
+ 0x1B0000001B1,
+ 0x1B4000001B5,
+ 0x1B6000001B7,
+ 0x1B9000001BC,
+ 0x1BD000001C4,
+ 0x1CE000001CF,
+ 0x1D0000001D1,
+ 0x1D2000001D3,
+ 0x1D4000001D5,
+ 0x1D6000001D7,
+ 0x1D8000001D9,
+ 0x1DA000001DB,
+ 0x1DC000001DE,
+ 0x1DF000001E0,
+ 0x1E1000001E2,
+ 0x1E3000001E4,
+ 0x1E5000001E6,
+ 0x1E7000001E8,
+ 0x1E9000001EA,
+ 0x1EB000001EC,
+ 0x1ED000001EE,
+ 0x1EF000001F1,
+ 0x1F5000001F6,
+ 0x1F9000001FA,
+ 0x1FB000001FC,
+ 0x1FD000001FE,
+ 0x1FF00000200,
+ 0x20100000202,
+ 0x20300000204,
+ 0x20500000206,
+ 0x20700000208,
+ 0x2090000020A,
+ 0x20B0000020C,
+ 0x20D0000020E,
+ 0x20F00000210,
+ 0x21100000212,
+ 0x21300000214,
+ 0x21500000216,
+ 0x21700000218,
+ 0x2190000021A,
+ 0x21B0000021C,
+ 0x21D0000021E,
+ 0x21F00000220,
+ 0x22100000222,
+ 0x22300000224,
+ 0x22500000226,
+ 0x22700000228,
+ 0x2290000022A,
+ 0x22B0000022C,
+ 0x22D0000022E,
+ 0x22F00000230,
+ 0x23100000232,
+ 0x2330000023A,
+ 0x23C0000023D,
+ 0x23F00000241,
+ 0x24200000243,
+ 0x24700000248,
+ 0x2490000024A,
+ 0x24B0000024C,
+ 0x24D0000024E,
+ 0x24F000002B0,
+ 0x2B9000002C2,
+ 0x2C6000002D2,
+ 0x2EC000002ED,
+ 0x2EE000002EF,
+ 0x30000000340,
+ 0x34200000343,
+ 0x3460000034F,
+ 0x35000000370,
+ 0x37100000372,
+ 0x37300000374,
+ 0x37700000378,
+ 0x37B0000037E,
+ 0x39000000391,
+ 0x3AC000003CF,
+ 0x3D7000003D8,
+ 0x3D9000003DA,
+ 0x3DB000003DC,
+ 0x3DD000003DE,
+ 0x3DF000003E0,
+ 0x3E1000003E2,
+ 0x3E3000003E4,
+ 0x3E5000003E6,
+ 0x3E7000003E8,
+ 0x3E9000003EA,
+ 0x3EB000003EC,
+ 0x3ED000003EE,
+ 0x3EF000003F0,
+ 0x3F3000003F4,
+ 0x3F8000003F9,
+ 0x3FB000003FD,
+ 0x43000000460,
+ 0x46100000462,
+ 0x46300000464,
+ 0x46500000466,
+ 0x46700000468,
+ 0x4690000046A,
+ 0x46B0000046C,
+ 0x46D0000046E,
+ 0x46F00000470,
+ 0x47100000472,
+ 0x47300000474,
+ 0x47500000476,
+ 0x47700000478,
+ 0x4790000047A,
+ 0x47B0000047C,
+ 0x47D0000047E,
+ 0x47F00000480,
+ 0x48100000482,
+ 0x48300000488,
+ 0x48B0000048C,
+ 0x48D0000048E,
+ 0x48F00000490,
+ 0x49100000492,
+ 0x49300000494,
+ 0x49500000496,
+ 0x49700000498,
+ 0x4990000049A,
+ 0x49B0000049C,
+ 0x49D0000049E,
+ 0x49F000004A0,
+ 0x4A1000004A2,
+ 0x4A3000004A4,
+ 0x4A5000004A6,
+ 0x4A7000004A8,
+ 0x4A9000004AA,
+ 0x4AB000004AC,
+ 0x4AD000004AE,
+ 0x4AF000004B0,
+ 0x4B1000004B2,
+ 0x4B3000004B4,
+ 0x4B5000004B6,
+ 0x4B7000004B8,
+ 0x4B9000004BA,
+ 0x4BB000004BC,
+ 0x4BD000004BE,
+ 0x4BF000004C0,
+ 0x4C2000004C3,
+ 0x4C4000004C5,
+ 0x4C6000004C7,
+ 0x4C8000004C9,
+ 0x4CA000004CB,
+ 0x4CC000004CD,
+ 0x4CE000004D0,
+ 0x4D1000004D2,
+ 0x4D3000004D4,
+ 0x4D5000004D6,
+ 0x4D7000004D8,
+ 0x4D9000004DA,
+ 0x4DB000004DC,
+ 0x4DD000004DE,
+ 0x4DF000004E0,
+ 0x4E1000004E2,
+ 0x4E3000004E4,
+ 0x4E5000004E6,
+ 0x4E7000004E8,
+ 0x4E9000004EA,
+ 0x4EB000004EC,
+ 0x4ED000004EE,
+ 0x4EF000004F0,
+ 0x4F1000004F2,
+ 0x4F3000004F4,
+ 0x4F5000004F6,
+ 0x4F7000004F8,
+ 0x4F9000004FA,
+ 0x4FB000004FC,
+ 0x4FD000004FE,
+ 0x4FF00000500,
+ 0x50100000502,
+ 0x50300000504,
+ 0x50500000506,
+ 0x50700000508,
+ 0x5090000050A,
+ 0x50B0000050C,
+ 0x50D0000050E,
+ 0x50F00000510,
+ 0x51100000512,
+ 0x51300000514,
+ 0x51500000516,
+ 0x51700000518,
+ 0x5190000051A,
+ 0x51B0000051C,
+ 0x51D0000051E,
+ 0x51F00000520,
+ 0x52100000522,
+ 0x52300000524,
+ 0x52500000526,
+ 0x52700000528,
+ 0x5290000052A,
+ 0x52B0000052C,
+ 0x52D0000052E,
+ 0x52F00000530,
+ 0x5590000055A,
+ 0x56000000587,
+ 0x58800000589,
+ 0x591000005BE,
+ 0x5BF000005C0,
+ 0x5C1000005C3,
+ 0x5C4000005C6,
+ 0x5C7000005C8,
+ 0x5D0000005EB,
+ 0x5EF000005F3,
+ 0x6100000061B,
+ 0x62000000640,
+ 0x64100000660,
+ 0x66E00000675,
+ 0x679000006D4,
+ 0x6D5000006DD,
+ 0x6DF000006E9,
+ 0x6EA000006F0,
+ 0x6FA00000700,
+ 0x7100000074B,
+ 0x74D000007B2,
+ 0x7C0000007F6,
+ 0x7FD000007FE,
+ 0x8000000082E,
+ 0x8400000085C,
+ 0x8600000086B,
+ 0x87000000888,
+ 0x8890000088F,
+ 0x898000008E2,
+ 0x8E300000958,
+ 0x96000000964,
+ 0x96600000970,
+ 0x97100000984,
+ 0x9850000098D,
+ 0x98F00000991,
+ 0x993000009A9,
+ 0x9AA000009B1,
+ 0x9B2000009B3,
+ 0x9B6000009BA,
+ 0x9BC000009C5,
+ 0x9C7000009C9,
+ 0x9CB000009CF,
+ 0x9D7000009D8,
+ 0x9E0000009E4,
+ 0x9E6000009F2,
+ 0x9FC000009FD,
+ 0x9FE000009FF,
+ 0xA0100000A04,
+ 0xA0500000A0B,
+ 0xA0F00000A11,
+ 0xA1300000A29,
+ 0xA2A00000A31,
+ 0xA3200000A33,
+ 0xA3500000A36,
+ 0xA3800000A3A,
+ 0xA3C00000A3D,
+ 0xA3E00000A43,
+ 0xA4700000A49,
+ 0xA4B00000A4E,
+ 0xA5100000A52,
+ 0xA5C00000A5D,
+ 0xA6600000A76,
+ 0xA8100000A84,
+ 0xA8500000A8E,
+ 0xA8F00000A92,
+ 0xA9300000AA9,
+ 0xAAA00000AB1,
+ 0xAB200000AB4,
+ 0xAB500000ABA,
+ 0xABC00000AC6,
+ 0xAC700000ACA,
+ 0xACB00000ACE,
+ 0xAD000000AD1,
+ 0xAE000000AE4,
+ 0xAE600000AF0,
+ 0xAF900000B00,
+ 0xB0100000B04,
+ 0xB0500000B0D,
+ 0xB0F00000B11,
+ 0xB1300000B29,
+ 0xB2A00000B31,
+ 0xB3200000B34,
+ 0xB3500000B3A,
+ 0xB3C00000B45,
+ 0xB4700000B49,
+ 0xB4B00000B4E,
+ 0xB5500000B58,
+ 0xB5F00000B64,
+ 0xB6600000B70,
+ 0xB7100000B72,
+ 0xB8200000B84,
+ 0xB8500000B8B,
+ 0xB8E00000B91,
+ 0xB9200000B96,
+ 0xB9900000B9B,
+ 0xB9C00000B9D,
+ 0xB9E00000BA0,
+ 0xBA300000BA5,
+ 0xBA800000BAB,
+ 0xBAE00000BBA,
+ 0xBBE00000BC3,
+ 0xBC600000BC9,
+ 0xBCA00000BCE,
+ 0xBD000000BD1,
+ 0xBD700000BD8,
+ 0xBE600000BF0,
+ 0xC0000000C0D,
+ 0xC0E00000C11,
+ 0xC1200000C29,
+ 0xC2A00000C3A,
+ 0xC3C00000C45,
+ 0xC4600000C49,
+ 0xC4A00000C4E,
+ 0xC5500000C57,
+ 0xC5800000C5B,
+ 0xC5D00000C5E,
+ 0xC6000000C64,
+ 0xC6600000C70,
+ 0xC8000000C84,
+ 0xC8500000C8D,
+ 0xC8E00000C91,
+ 0xC9200000CA9,
+ 0xCAA00000CB4,
+ 0xCB500000CBA,
+ 0xCBC00000CC5,
+ 0xCC600000CC9,
+ 0xCCA00000CCE,
+ 0xCD500000CD7,
+ 0xCDD00000CDF,
+ 0xCE000000CE4,
+ 0xCE600000CF0,
+ 0xCF100000CF4,
+ 0xD0000000D0D,
+ 0xD0E00000D11,
+ 0xD1200000D45,
+ 0xD4600000D49,
+ 0xD4A00000D4F,
+ 0xD5400000D58,
+ 0xD5F00000D64,
+ 0xD6600000D70,
+ 0xD7A00000D80,
+ 0xD8100000D84,
+ 0xD8500000D97,
+ 0xD9A00000DB2,
+ 0xDB300000DBC,
+ 0xDBD00000DBE,
+ 0xDC000000DC7,
+ 0xDCA00000DCB,
+ 0xDCF00000DD5,
+ 0xDD600000DD7,
+ 0xDD800000DE0,
+ 0xDE600000DF0,
+ 0xDF200000DF4,
+ 0xE0100000E33,
+ 0xE3400000E3B,
+ 0xE4000000E4F,
+ 0xE5000000E5A,
+ 0xE8100000E83,
+ 0xE8400000E85,
+ 0xE8600000E8B,
+ 0xE8C00000EA4,
+ 0xEA500000EA6,
+ 0xEA700000EB3,
+ 0xEB400000EBE,
+ 0xEC000000EC5,
+ 0xEC600000EC7,
+ 0xEC800000ECF,
+ 0xED000000EDA,
+ 0xEDE00000EE0,
+ 0xF0000000F01,
+ 0xF0B00000F0C,
+ 0xF1800000F1A,
+ 0xF2000000F2A,
+ 0xF3500000F36,
+ 0xF3700000F38,
+ 0xF3900000F3A,
+ 0xF3E00000F43,
+ 0xF4400000F48,
+ 0xF4900000F4D,
+ 0xF4E00000F52,
+ 0xF5300000F57,
+ 0xF5800000F5C,
+ 0xF5D00000F69,
+ 0xF6A00000F6D,
+ 0xF7100000F73,
+ 0xF7400000F75,
+ 0xF7A00000F81,
+ 0xF8200000F85,
+ 0xF8600000F93,
+ 0xF9400000F98,
+ 0xF9900000F9D,
+ 0xF9E00000FA2,
+ 0xFA300000FA7,
+ 0xFA800000FAC,
+ 0xFAD00000FB9,
+ 0xFBA00000FBD,
+ 0xFC600000FC7,
+ 0x10000000104A,
+ 0x10500000109E,
+ 0x10D0000010FB,
+ 0x10FD00001100,
+ 0x120000001249,
+ 0x124A0000124E,
+ 0x125000001257,
+ 0x125800001259,
+ 0x125A0000125E,
+ 0x126000001289,
+ 0x128A0000128E,
+ 0x1290000012B1,
+ 0x12B2000012B6,
+ 0x12B8000012BF,
+ 0x12C0000012C1,
+ 0x12C2000012C6,
+ 0x12C8000012D7,
+ 0x12D800001311,
+ 0x131200001316,
+ 0x13180000135B,
+ 0x135D00001360,
+ 0x138000001390,
+ 0x13A0000013F6,
+ 0x14010000166D,
+ 0x166F00001680,
+ 0x16810000169B,
+ 0x16A0000016EB,
+ 0x16F1000016F9,
+ 0x170000001716,
+ 0x171F00001735,
+ 0x174000001754,
+ 0x17600000176D,
+ 0x176E00001771,
+ 0x177200001774,
+ 0x1780000017B4,
+ 0x17B6000017D4,
+ 0x17D7000017D8,
+ 0x17DC000017DE,
+ 0x17E0000017EA,
+ 0x18100000181A,
+ 0x182000001879,
+ 0x1880000018AB,
+ 0x18B0000018F6,
+ 0x19000000191F,
+ 0x19200000192C,
+ 0x19300000193C,
+ 0x19460000196E,
+ 0x197000001975,
+ 0x1980000019AC,
+ 0x19B0000019CA,
+ 0x19D0000019DA,
+ 0x1A0000001A1C,
+ 0x1A2000001A5F,
+ 0x1A6000001A7D,
+ 0x1A7F00001A8A,
+ 0x1A9000001A9A,
+ 0x1AA700001AA8,
+ 0x1AB000001ABE,
+ 0x1ABF00001ACF,
+ 0x1B0000001B4D,
+ 0x1B5000001B5A,
+ 0x1B6B00001B74,
+ 0x1B8000001BF4,
+ 0x1C0000001C38,
+ 0x1C4000001C4A,
+ 0x1C4D00001C7E,
+ 0x1CD000001CD3,
+ 0x1CD400001CFB,
+ 0x1D0000001D2C,
+ 0x1D2F00001D30,
+ 0x1D3B00001D3C,
+ 0x1D4E00001D4F,
+ 0x1D6B00001D78,
+ 0x1D7900001D9B,
+ 0x1DC000001E00,
+ 0x1E0100001E02,
+ 0x1E0300001E04,
+ 0x1E0500001E06,
+ 0x1E0700001E08,
+ 0x1E0900001E0A,
+ 0x1E0B00001E0C,
+ 0x1E0D00001E0E,
+ 0x1E0F00001E10,
+ 0x1E1100001E12,
+ 0x1E1300001E14,
+ 0x1E1500001E16,
+ 0x1E1700001E18,
+ 0x1E1900001E1A,
+ 0x1E1B00001E1C,
+ 0x1E1D00001E1E,
+ 0x1E1F00001E20,
+ 0x1E2100001E22,
+ 0x1E2300001E24,
+ 0x1E2500001E26,
+ 0x1E2700001E28,
+ 0x1E2900001E2A,
+ 0x1E2B00001E2C,
+ 0x1E2D00001E2E,
+ 0x1E2F00001E30,
+ 0x1E3100001E32,
+ 0x1E3300001E34,
+ 0x1E3500001E36,
+ 0x1E3700001E38,
+ 0x1E3900001E3A,
+ 0x1E3B00001E3C,
+ 0x1E3D00001E3E,
+ 0x1E3F00001E40,
+ 0x1E4100001E42,
+ 0x1E4300001E44,
+ 0x1E4500001E46,
+ 0x1E4700001E48,
+ 0x1E4900001E4A,
+ 0x1E4B00001E4C,
+ 0x1E4D00001E4E,
+ 0x1E4F00001E50,
+ 0x1E5100001E52,
+ 0x1E5300001E54,
+ 0x1E5500001E56,
+ 0x1E5700001E58,
+ 0x1E5900001E5A,
+ 0x1E5B00001E5C,
+ 0x1E5D00001E5E,
+ 0x1E5F00001E60,
+ 0x1E6100001E62,
+ 0x1E6300001E64,
+ 0x1E6500001E66,
+ 0x1E6700001E68,
+ 0x1E6900001E6A,
+ 0x1E6B00001E6C,
+ 0x1E6D00001E6E,
+ 0x1E6F00001E70,
+ 0x1E7100001E72,
+ 0x1E7300001E74,
+ 0x1E7500001E76,
+ 0x1E7700001E78,
+ 0x1E7900001E7A,
+ 0x1E7B00001E7C,
+ 0x1E7D00001E7E,
+ 0x1E7F00001E80,
+ 0x1E8100001E82,
+ 0x1E8300001E84,
+ 0x1E8500001E86,
+ 0x1E8700001E88,
+ 0x1E8900001E8A,
+ 0x1E8B00001E8C,
+ 0x1E8D00001E8E,
+ 0x1E8F00001E90,
+ 0x1E9100001E92,
+ 0x1E9300001E94,
+ 0x1E9500001E9A,
+ 0x1E9C00001E9E,
+ 0x1E9F00001EA0,
+ 0x1EA100001EA2,
+ 0x1EA300001EA4,
+ 0x1EA500001EA6,
+ 0x1EA700001EA8,
+ 0x1EA900001EAA,
+ 0x1EAB00001EAC,
+ 0x1EAD00001EAE,
+ 0x1EAF00001EB0,
+ 0x1EB100001EB2,
+ 0x1EB300001EB4,
+ 0x1EB500001EB6,
+ 0x1EB700001EB8,
+ 0x1EB900001EBA,
+ 0x1EBB00001EBC,
+ 0x1EBD00001EBE,
+ 0x1EBF00001EC0,
+ 0x1EC100001EC2,
+ 0x1EC300001EC4,
+ 0x1EC500001EC6,
+ 0x1EC700001EC8,
+ 0x1EC900001ECA,
+ 0x1ECB00001ECC,
+ 0x1ECD00001ECE,
+ 0x1ECF00001ED0,
+ 0x1ED100001ED2,
+ 0x1ED300001ED4,
+ 0x1ED500001ED6,
+ 0x1ED700001ED8,
+ 0x1ED900001EDA,
+ 0x1EDB00001EDC,
+ 0x1EDD00001EDE,
+ 0x1EDF00001EE0,
+ 0x1EE100001EE2,
+ 0x1EE300001EE4,
+ 0x1EE500001EE6,
+ 0x1EE700001EE8,
+ 0x1EE900001EEA,
+ 0x1EEB00001EEC,
+ 0x1EED00001EEE,
+ 0x1EEF00001EF0,
+ 0x1EF100001EF2,
+ 0x1EF300001EF4,
+ 0x1EF500001EF6,
+ 0x1EF700001EF8,
+ 0x1EF900001EFA,
+ 0x1EFB00001EFC,
+ 0x1EFD00001EFE,
+ 0x1EFF00001F08,
+ 0x1F1000001F16,
+ 0x1F2000001F28,
+ 0x1F3000001F38,
+ 0x1F4000001F46,
+ 0x1F5000001F58,
+ 0x1F6000001F68,
+ 0x1F7000001F71,
+ 0x1F7200001F73,
+ 0x1F7400001F75,
+ 0x1F7600001F77,
+ 0x1F7800001F79,
+ 0x1F7A00001F7B,
+ 0x1F7C00001F7D,
+ 0x1FB000001FB2,
+ 0x1FB600001FB7,
+ 0x1FC600001FC7,
+ 0x1FD000001FD3,
+ 0x1FD600001FD8,
+ 0x1FE000001FE3,
+ 0x1FE400001FE8,
+ 0x1FF600001FF7,
+ 0x214E0000214F,
+ 0x218400002185,
+ 0x2C3000002C60,
+ 0x2C6100002C62,
+ 0x2C6500002C67,
+ 0x2C6800002C69,
+ 0x2C6A00002C6B,
+ 0x2C6C00002C6D,
+ 0x2C7100002C72,
+ 0x2C7300002C75,
+ 0x2C7600002C7C,
+ 0x2C8100002C82,
+ 0x2C8300002C84,
+ 0x2C8500002C86,
+ 0x2C8700002C88,
+ 0x2C8900002C8A,
+ 0x2C8B00002C8C,
+ 0x2C8D00002C8E,
+ 0x2C8F00002C90,
+ 0x2C9100002C92,
+ 0x2C9300002C94,
+ 0x2C9500002C96,
+ 0x2C9700002C98,
+ 0x2C9900002C9A,
+ 0x2C9B00002C9C,
+ 0x2C9D00002C9E,
+ 0x2C9F00002CA0,
+ 0x2CA100002CA2,
+ 0x2CA300002CA4,
+ 0x2CA500002CA6,
+ 0x2CA700002CA8,
+ 0x2CA900002CAA,
+ 0x2CAB00002CAC,
+ 0x2CAD00002CAE,
+ 0x2CAF00002CB0,
+ 0x2CB100002CB2,
+ 0x2CB300002CB4,
+ 0x2CB500002CB6,
+ 0x2CB700002CB8,
+ 0x2CB900002CBA,
+ 0x2CBB00002CBC,
+ 0x2CBD00002CBE,
+ 0x2CBF00002CC0,
+ 0x2CC100002CC2,
+ 0x2CC300002CC4,
+ 0x2CC500002CC6,
+ 0x2CC700002CC8,
+ 0x2CC900002CCA,
+ 0x2CCB00002CCC,
+ 0x2CCD00002CCE,
+ 0x2CCF00002CD0,
+ 0x2CD100002CD2,
+ 0x2CD300002CD4,
+ 0x2CD500002CD6,
+ 0x2CD700002CD8,
+ 0x2CD900002CDA,
+ 0x2CDB00002CDC,
+ 0x2CDD00002CDE,
+ 0x2CDF00002CE0,
+ 0x2CE100002CE2,
+ 0x2CE300002CE5,
+ 0x2CEC00002CED,
+ 0x2CEE00002CF2,
+ 0x2CF300002CF4,
+ 0x2D0000002D26,
+ 0x2D2700002D28,
+ 0x2D2D00002D2E,
+ 0x2D3000002D68,
+ 0x2D7F00002D97,
+ 0x2DA000002DA7,
+ 0x2DA800002DAF,
+ 0x2DB000002DB7,
+ 0x2DB800002DBF,
+ 0x2DC000002DC7,
+ 0x2DC800002DCF,
+ 0x2DD000002DD7,
+ 0x2DD800002DDF,
+ 0x2DE000002E00,
+ 0x2E2F00002E30,
+ 0x300500003008,
+ 0x302A0000302E,
+ 0x303C0000303D,
+ 0x304100003097,
+ 0x30990000309B,
+ 0x309D0000309F,
+ 0x30A1000030FB,
+ 0x30FC000030FF,
+ 0x310500003130,
+ 0x31A0000031C0,
+ 0x31F000003200,
+ 0x340000004DC0,
+ 0x4E000000A48D,
+ 0xA4D00000A4FE,
+ 0xA5000000A60D,
+ 0xA6100000A62C,
+ 0xA6410000A642,
+ 0xA6430000A644,
+ 0xA6450000A646,
+ 0xA6470000A648,
+ 0xA6490000A64A,
+ 0xA64B0000A64C,
+ 0xA64D0000A64E,
+ 0xA64F0000A650,
+ 0xA6510000A652,
+ 0xA6530000A654,
+ 0xA6550000A656,
+ 0xA6570000A658,
+ 0xA6590000A65A,
+ 0xA65B0000A65C,
+ 0xA65D0000A65E,
+ 0xA65F0000A660,
+ 0xA6610000A662,
+ 0xA6630000A664,
+ 0xA6650000A666,
+ 0xA6670000A668,
+ 0xA6690000A66A,
+ 0xA66B0000A66C,
+ 0xA66D0000A670,
+ 0xA6740000A67E,
+ 0xA67F0000A680,
+ 0xA6810000A682,
+ 0xA6830000A684,
+ 0xA6850000A686,
+ 0xA6870000A688,
+ 0xA6890000A68A,
+ 0xA68B0000A68C,
+ 0xA68D0000A68E,
+ 0xA68F0000A690,
+ 0xA6910000A692,
+ 0xA6930000A694,
+ 0xA6950000A696,
+ 0xA6970000A698,
+ 0xA6990000A69A,
+ 0xA69B0000A69C,
+ 0xA69E0000A6E6,
+ 0xA6F00000A6F2,
+ 0xA7170000A720,
+ 0xA7230000A724,
+ 0xA7250000A726,
+ 0xA7270000A728,
+ 0xA7290000A72A,
+ 0xA72B0000A72C,
+ 0xA72D0000A72E,
+ 0xA72F0000A732,
+ 0xA7330000A734,
+ 0xA7350000A736,
+ 0xA7370000A738,
+ 0xA7390000A73A,
+ 0xA73B0000A73C,
+ 0xA73D0000A73E,
+ 0xA73F0000A740,
+ 0xA7410000A742,
+ 0xA7430000A744,
+ 0xA7450000A746,
+ 0xA7470000A748,
+ 0xA7490000A74A,
+ 0xA74B0000A74C,
+ 0xA74D0000A74E,
+ 0xA74F0000A750,
+ 0xA7510000A752,
+ 0xA7530000A754,
+ 0xA7550000A756,
+ 0xA7570000A758,
+ 0xA7590000A75A,
+ 0xA75B0000A75C,
+ 0xA75D0000A75E,
+ 0xA75F0000A760,
+ 0xA7610000A762,
+ 0xA7630000A764,
+ 0xA7650000A766,
+ 0xA7670000A768,
+ 0xA7690000A76A,
+ 0xA76B0000A76C,
+ 0xA76D0000A76E,
+ 0xA76F0000A770,
+ 0xA7710000A779,
+ 0xA77A0000A77B,
+ 0xA77C0000A77D,
+ 0xA77F0000A780,
+ 0xA7810000A782,
+ 0xA7830000A784,
+ 0xA7850000A786,
+ 0xA7870000A789,
+ 0xA78C0000A78D,
+ 0xA78E0000A790,
+ 0xA7910000A792,
+ 0xA7930000A796,
+ 0xA7970000A798,
+ 0xA7990000A79A,
+ 0xA79B0000A79C,
+ 0xA79D0000A79E,
+ 0xA79F0000A7A0,
+ 0xA7A10000A7A2,
+ 0xA7A30000A7A4,
+ 0xA7A50000A7A6,
+ 0xA7A70000A7A8,
+ 0xA7A90000A7AA,
+ 0xA7AF0000A7B0,
+ 0xA7B50000A7B6,
+ 0xA7B70000A7B8,
+ 0xA7B90000A7BA,
+ 0xA7BB0000A7BC,
+ 0xA7BD0000A7BE,
+ 0xA7BF0000A7C0,
+ 0xA7C10000A7C2,
+ 0xA7C30000A7C4,
+ 0xA7C80000A7C9,
+ 0xA7CA0000A7CB,
+ 0xA7D10000A7D2,
+ 0xA7D30000A7D4,
+ 0xA7D50000A7D6,
+ 0xA7D70000A7D8,
+ 0xA7D90000A7DA,
+ 0xA7F60000A7F8,
+ 0xA7FA0000A828,
+ 0xA82C0000A82D,
+ 0xA8400000A874,
+ 0xA8800000A8C6,
+ 0xA8D00000A8DA,
+ 0xA8E00000A8F8,
+ 0xA8FB0000A8FC,
+ 0xA8FD0000A92E,
+ 0xA9300000A954,
+ 0xA9800000A9C1,
+ 0xA9CF0000A9DA,
+ 0xA9E00000A9FF,
+ 0xAA000000AA37,
+ 0xAA400000AA4E,
+ 0xAA500000AA5A,
+ 0xAA600000AA77,
+ 0xAA7A0000AAC3,
+ 0xAADB0000AADE,
+ 0xAAE00000AAF0,
+ 0xAAF20000AAF7,
+ 0xAB010000AB07,
+ 0xAB090000AB0F,
+ 0xAB110000AB17,
+ 0xAB200000AB27,
+ 0xAB280000AB2F,
+ 0xAB300000AB5B,
+ 0xAB600000AB69,
+ 0xABC00000ABEB,
+ 0xABEC0000ABEE,
+ 0xABF00000ABFA,
+ 0xAC000000D7A4,
+ 0xFA0E0000FA10,
+ 0xFA110000FA12,
+ 0xFA130000FA15,
+ 0xFA1F0000FA20,
+ 0xFA210000FA22,
+ 0xFA230000FA25,
+ 0xFA270000FA2A,
+ 0xFB1E0000FB1F,
+ 0xFE200000FE30,
+ 0xFE730000FE74,
+ 0x100000001000C,
+ 0x1000D00010027,
+ 0x100280001003B,
+ 0x1003C0001003E,
+ 0x1003F0001004E,
+ 0x100500001005E,
+ 0x10080000100FB,
+ 0x101FD000101FE,
+ 0x102800001029D,
+ 0x102A0000102D1,
+ 0x102E0000102E1,
+ 0x1030000010320,
+ 0x1032D00010341,
+ 0x103420001034A,
+ 0x103500001037B,
+ 0x103800001039E,
+ 0x103A0000103C4,
+ 0x103C8000103D0,
+ 0x104280001049E,
+ 0x104A0000104AA,
+ 0x104D8000104FC,
+ 0x1050000010528,
+ 0x1053000010564,
+ 0x10597000105A2,
+ 0x105A3000105B2,
+ 0x105B3000105BA,
+ 0x105BB000105BD,
+ 0x1060000010737,
+ 0x1074000010756,
+ 0x1076000010768,
+ 0x1078000010781,
+ 0x1080000010806,
+ 0x1080800010809,
+ 0x1080A00010836,
+ 0x1083700010839,
+ 0x1083C0001083D,
+ 0x1083F00010856,
+ 0x1086000010877,
+ 0x108800001089F,
+ 0x108E0000108F3,
+ 0x108F4000108F6,
+ 0x1090000010916,
+ 0x109200001093A,
+ 0x10980000109B8,
+ 0x109BE000109C0,
+ 0x10A0000010A04,
+ 0x10A0500010A07,
+ 0x10A0C00010A14,
+ 0x10A1500010A18,
+ 0x10A1900010A36,
+ 0x10A3800010A3B,
+ 0x10A3F00010A40,
+ 0x10A6000010A7D,
+ 0x10A8000010A9D,
+ 0x10AC000010AC8,
+ 0x10AC900010AE7,
+ 0x10B0000010B36,
+ 0x10B4000010B56,
+ 0x10B6000010B73,
+ 0x10B8000010B92,
+ 0x10C0000010C49,
+ 0x10CC000010CF3,
+ 0x10D0000010D28,
+ 0x10D3000010D3A,
+ 0x10E8000010EAA,
+ 0x10EAB00010EAD,
+ 0x10EB000010EB2,
+ 0x10EFD00010F1D,
+ 0x10F2700010F28,
+ 0x10F3000010F51,
+ 0x10F7000010F86,
+ 0x10FB000010FC5,
+ 0x10FE000010FF7,
+ 0x1100000011047,
+ 0x1106600011076,
+ 0x1107F000110BB,
+ 0x110C2000110C3,
+ 0x110D0000110E9,
+ 0x110F0000110FA,
+ 0x1110000011135,
+ 0x1113600011140,
+ 0x1114400011148,
+ 0x1115000011174,
+ 0x1117600011177,
+ 0x11180000111C5,
+ 0x111C9000111CD,
+ 0x111CE000111DB,
+ 0x111DC000111DD,
+ 0x1120000011212,
+ 0x1121300011238,
+ 0x1123E00011242,
+ 0x1128000011287,
+ 0x1128800011289,
+ 0x1128A0001128E,
+ 0x1128F0001129E,
+ 0x1129F000112A9,
+ 0x112B0000112EB,
+ 0x112F0000112FA,
+ 0x1130000011304,
+ 0x113050001130D,
+ 0x1130F00011311,
+ 0x1131300011329,
+ 0x1132A00011331,
+ 0x1133200011334,
+ 0x113350001133A,
+ 0x1133B00011345,
+ 0x1134700011349,
+ 0x1134B0001134E,
+ 0x1135000011351,
+ 0x1135700011358,
+ 0x1135D00011364,
+ 0x113660001136D,
+ 0x1137000011375,
+ 0x114000001144B,
+ 0x114500001145A,
+ 0x1145E00011462,
+ 0x11480000114C6,
+ 0x114C7000114C8,
+ 0x114D0000114DA,
+ 0x11580000115B6,
+ 0x115B8000115C1,
+ 0x115D8000115DE,
+ 0x1160000011641,
+ 0x1164400011645,
+ 0x116500001165A,
+ 0x11680000116B9,
+ 0x116C0000116CA,
+ 0x117000001171B,
+ 0x1171D0001172C,
+ 0x117300001173A,
+ 0x1174000011747,
+ 0x118000001183B,
+ 0x118C0000118EA,
+ 0x118FF00011907,
+ 0x119090001190A,
+ 0x1190C00011914,
+ 0x1191500011917,
+ 0x1191800011936,
+ 0x1193700011939,
+ 0x1193B00011944,
+ 0x119500001195A,
+ 0x119A0000119A8,
+ 0x119AA000119D8,
+ 0x119DA000119E2,
+ 0x119E3000119E5,
+ 0x11A0000011A3F,
+ 0x11A4700011A48,
+ 0x11A5000011A9A,
+ 0x11A9D00011A9E,
+ 0x11AB000011AF9,
+ 0x11C0000011C09,
+ 0x11C0A00011C37,
+ 0x11C3800011C41,
+ 0x11C5000011C5A,
+ 0x11C7200011C90,
+ 0x11C9200011CA8,
+ 0x11CA900011CB7,
+ 0x11D0000011D07,
+ 0x11D0800011D0A,
+ 0x11D0B00011D37,
+ 0x11D3A00011D3B,
+ 0x11D3C00011D3E,
+ 0x11D3F00011D48,
+ 0x11D5000011D5A,
+ 0x11D6000011D66,
+ 0x11D6700011D69,
+ 0x11D6A00011D8F,
+ 0x11D9000011D92,
+ 0x11D9300011D99,
+ 0x11DA000011DAA,
+ 0x11EE000011EF7,
+ 0x11F0000011F11,
+ 0x11F1200011F3B,
+ 0x11F3E00011F43,
+ 0x11F5000011F5A,
+ 0x11FB000011FB1,
+ 0x120000001239A,
+ 0x1248000012544,
+ 0x12F9000012FF1,
+ 0x1300000013430,
+ 0x1344000013456,
+ 0x1440000014647,
+ 0x1680000016A39,
+ 0x16A4000016A5F,
+ 0x16A6000016A6A,
+ 0x16A7000016ABF,
+ 0x16AC000016ACA,
+ 0x16AD000016AEE,
+ 0x16AF000016AF5,
+ 0x16B0000016B37,
+ 0x16B4000016B44,
+ 0x16B5000016B5A,
+ 0x16B6300016B78,
+ 0x16B7D00016B90,
+ 0x16E6000016E80,
+ 0x16F0000016F4B,
+ 0x16F4F00016F88,
+ 0x16F8F00016FA0,
+ 0x16FE000016FE2,
+ 0x16FE300016FE5,
+ 0x16FF000016FF2,
+ 0x17000000187F8,
+ 0x1880000018CD6,
+ 0x18D0000018D09,
+ 0x1AFF00001AFF4,
+ 0x1AFF50001AFFC,
+ 0x1AFFD0001AFFF,
+ 0x1B0000001B123,
+ 0x1B1320001B133,
+ 0x1B1500001B153,
+ 0x1B1550001B156,
+ 0x1B1640001B168,
+ 0x1B1700001B2FC,
+ 0x1BC000001BC6B,
+ 0x1BC700001BC7D,
+ 0x1BC800001BC89,
+ 0x1BC900001BC9A,
+ 0x1BC9D0001BC9F,
+ 0x1CF000001CF2E,
+ 0x1CF300001CF47,
+ 0x1DA000001DA37,
+ 0x1DA3B0001DA6D,
+ 0x1DA750001DA76,
+ 0x1DA840001DA85,
+ 0x1DA9B0001DAA0,
+ 0x1DAA10001DAB0,
+ 0x1DF000001DF1F,
+ 0x1DF250001DF2B,
+ 0x1E0000001E007,
+ 0x1E0080001E019,
+ 0x1E01B0001E022,
+ 0x1E0230001E025,
+ 0x1E0260001E02B,
+ 0x1E08F0001E090,
+ 0x1E1000001E12D,
+ 0x1E1300001E13E,
+ 0x1E1400001E14A,
+ 0x1E14E0001E14F,
+ 0x1E2900001E2AF,
+ 0x1E2C00001E2FA,
+ 0x1E4D00001E4FA,
+ 0x1E7E00001E7E7,
+ 0x1E7E80001E7EC,
+ 0x1E7ED0001E7EF,
+ 0x1E7F00001E7FF,
+ 0x1E8000001E8C5,
+ 0x1E8D00001E8D7,
+ 0x1E9220001E94C,
+ 0x1E9500001E95A,
+ 0x200000002A6E0,
+ 0x2A7000002B73A,
+ 0x2B7400002B81E,
+ 0x2B8200002CEA2,
+ 0x2CEB00002EBE1,
+ 0x2EBF00002EE5E,
+ 0x300000003134B,
+ 0x31350000323B0,
+ ),
+ "CONTEXTJ": (0x200C0000200E,),
+ "CONTEXTO": (
+ 0xB7000000B8,
+ 0x37500000376,
+ 0x5F3000005F5,
+ 0x6600000066A,
+ 0x6F0000006FA,
+ 0x30FB000030FC,
+ ),
+}
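
Each value in `codepoint_classes` packs a half-open codepoint range into one integer as `(start << 32) | end`, the same encoding the `intranges` helpers added below operate on. A small sketch of unpacking the first `PVALID` entry (the helper here is a local illustration, not part of the package API):

```python
def unpack_range(packed: int) -> tuple[int, int]:
    """Split (start << 32) | end back into a half-open (start, end) pair."""
    return packed >> 32, packed & ((1 << 32) - 1)

start, end = unpack_range(0x2D0000002E)   # first "PVALID" entry above
print(hex(start), hex(end))               # 0x2d 0x2e: only U+002D ('-') is PVALID
```
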
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/idna/intranges.py b/stripe_config/.venv/lib/python3.12/site-packages/idna/intranges.py
new file mode 100644
index 0000000..7bfaa8d
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/idna/intranges.py
@@ -0,0 +1,57 @@
+"""
+Given a list of integers, made up of (hopefully) a small number of long runs
+of consecutive integers, compute a representation of the form
+((start1, end1), (start2, end2) ...). Then answer the question "was x present
+in the original list?" in time O(log(# runs)).
+"""
+
+import bisect
+from typing import List, Tuple
+
+
+def intranges_from_list(list_: List[int]) -> Tuple[int, ...]:
+ """Represent a list of integers as a sequence of ranges:
+ ((start_0, end_0), (start_1, end_1), ...), such that the original
+ integers are exactly those x such that start_i <= x < end_i for some i.
+
+ Ranges are encoded as single integers (start << 32 | end), not as tuples.
+ """
+
+ sorted_list = sorted(list_)
+ ranges = []
+ last_write = -1
+ for i in range(len(sorted_list)):
+ if i + 1 < len(sorted_list):
+ if sorted_list[i] == sorted_list[i + 1] - 1:
+ continue
+ current_range = sorted_list[last_write + 1 : i + 1]
+ ranges.append(_encode_range(current_range[0], current_range[-1] + 1))
+ last_write = i
+
+ return tuple(ranges)
+
+
+def _encode_range(start: int, end: int) -> int:
+ return (start << 32) | end
+
+
+def _decode_range(r: int) -> Tuple[int, int]:
+ return (r >> 32), (r & ((1 << 32) - 1))
+
+
+def intranges_contain(int_: int, ranges: Tuple[int, ...]) -> bool:
+ """Determine if `int_` falls into one of the ranges in `ranges`."""
+ tuple_ = _encode_range(int_, 0)
+ pos = bisect.bisect_left(ranges, tuple_)
+ # the encoded range immediately before pos may cover int_:
+ # (start, end) with start <= int_ < end
+ if pos > 0:
+ left, right = _decode_range(ranges[pos - 1])
+ if left <= int_ < right:
+ return True
+ # or we could be immediately behind a tuple (int_, end)
+ if pos < len(ranges):
+ left, _ = _decode_range(ranges[pos])
+ if left == int_:
+ return True
+ return False
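
The module above compresses a sorted collection of codepoints into packed `(start << 32) | end` ranges and answers membership queries with a binary search over that tuple. A short usage sketch (the sample codepoints are illustrative, not taken from the IDNA tables):

```python
from idna.intranges import intranges_from_list, intranges_contain

# Two runs of consecutive integers: ASCII digits and lowercase letters.
codepoints = list(range(0x30, 0x3A)) + list(range(0x61, 0x7B))
ranges = intranges_from_list(codepoints)

print(len(ranges))                      # 2 -- one packed integer per run
print(intranges_contain(0x35, ranges))  # True:  '5' lies inside 0x30-0x39
print(intranges_contain(0x40, ranges))  # False: '@' falls between the runs
```
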
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/idna/package_data.py b/stripe_config/.venv/lib/python3.12/site-packages/idna/package_data.py
new file mode 100644
index 0000000..514ff7e
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/idna/package_data.py
@@ -0,0 +1 @@
+__version__ = "3.10"
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/idna/py.typed b/stripe_config/.venv/lib/python3.12/site-packages/idna/py.typed
new file mode 100644
index 0000000..e69de29
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/idna/uts46data.py b/stripe_config/.venv/lib/python3.12/site-packages/idna/uts46data.py
new file mode 100644
index 0000000..eb89432
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/idna/uts46data.py
@@ -0,0 +1,8681 @@
+# This file is automatically generated by tools/idna-data
+# vim: set fileencoding=utf-8 :
+
+from typing import List, Tuple, Union
+
+"""IDNA Mapping Table from UTS46."""
+
+
+__version__ = "15.1.0"
+
+
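
Each `_seg_*` function below returns one slice of the UTS #46 mapping table as tuples of `(codepoint, status)` or `(codepoint, status, mapping)`. The status letters are assumptions on my part, following the usual UTS #46 categories: `V` valid, `M` mapped to the replacement string, `D` deviation, `I` ignored, `3` STD3-restricted, `X` disallowed. A minimal sketch of reading a few rows copied from the segments below, under that interpretation:

```python
# Rows copied from the segments below; status meanings are assumed:
# V = valid, M = mapped, D = deviation, I = ignored,
# 3 = STD3-restricted, X = disallowed.
rows = [
    (0x41, "M", "a"),   # 'A' is mapped to 'a'
    (0x61, "V"),        # 'a' is valid as-is
    (0x80, "X"),        # U+0080 is disallowed
    (0xAD, "I"),        # U+00AD SOFT HYPHEN is ignored (dropped)
    (0xDF, "D", "ss"),  # 'ß' is a deviation character
]

for row in rows:
    codepoint, status = row[0], row[1]
    mapping = row[2] if len(row) == 3 else None
    print(f"U+{codepoint:04X}: status={status} mapping={mapping!r}")
```
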
+def _seg_0() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0x0, "3"),
+ (0x1, "3"),
+ (0x2, "3"),
+ (0x3, "3"),
+ (0x4, "3"),
+ (0x5, "3"),
+ (0x6, "3"),
+ (0x7, "3"),
+ (0x8, "3"),
+ (0x9, "3"),
+ (0xA, "3"),
+ (0xB, "3"),
+ (0xC, "3"),
+ (0xD, "3"),
+ (0xE, "3"),
+ (0xF, "3"),
+ (0x10, "3"),
+ (0x11, "3"),
+ (0x12, "3"),
+ (0x13, "3"),
+ (0x14, "3"),
+ (0x15, "3"),
+ (0x16, "3"),
+ (0x17, "3"),
+ (0x18, "3"),
+ (0x19, "3"),
+ (0x1A, "3"),
+ (0x1B, "3"),
+ (0x1C, "3"),
+ (0x1D, "3"),
+ (0x1E, "3"),
+ (0x1F, "3"),
+ (0x20, "3"),
+ (0x21, "3"),
+ (0x22, "3"),
+ (0x23, "3"),
+ (0x24, "3"),
+ (0x25, "3"),
+ (0x26, "3"),
+ (0x27, "3"),
+ (0x28, "3"),
+ (0x29, "3"),
+ (0x2A, "3"),
+ (0x2B, "3"),
+ (0x2C, "3"),
+ (0x2D, "V"),
+ (0x2E, "V"),
+ (0x2F, "3"),
+ (0x30, "V"),
+ (0x31, "V"),
+ (0x32, "V"),
+ (0x33, "V"),
+ (0x34, "V"),
+ (0x35, "V"),
+ (0x36, "V"),
+ (0x37, "V"),
+ (0x38, "V"),
+ (0x39, "V"),
+ (0x3A, "3"),
+ (0x3B, "3"),
+ (0x3C, "3"),
+ (0x3D, "3"),
+ (0x3E, "3"),
+ (0x3F, "3"),
+ (0x40, "3"),
+ (0x41, "M", "a"),
+ (0x42, "M", "b"),
+ (0x43, "M", "c"),
+ (0x44, "M", "d"),
+ (0x45, "M", "e"),
+ (0x46, "M", "f"),
+ (0x47, "M", "g"),
+ (0x48, "M", "h"),
+ (0x49, "M", "i"),
+ (0x4A, "M", "j"),
+ (0x4B, "M", "k"),
+ (0x4C, "M", "l"),
+ (0x4D, "M", "m"),
+ (0x4E, "M", "n"),
+ (0x4F, "M", "o"),
+ (0x50, "M", "p"),
+ (0x51, "M", "q"),
+ (0x52, "M", "r"),
+ (0x53, "M", "s"),
+ (0x54, "M", "t"),
+ (0x55, "M", "u"),
+ (0x56, "M", "v"),
+ (0x57, "M", "w"),
+ (0x58, "M", "x"),
+ (0x59, "M", "y"),
+ (0x5A, "M", "z"),
+ (0x5B, "3"),
+ (0x5C, "3"),
+ (0x5D, "3"),
+ (0x5E, "3"),
+ (0x5F, "3"),
+ (0x60, "3"),
+ (0x61, "V"),
+ (0x62, "V"),
+ (0x63, "V"),
+ ]
+
+
+def _seg_1() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0x64, "V"),
+ (0x65, "V"),
+ (0x66, "V"),
+ (0x67, "V"),
+ (0x68, "V"),
+ (0x69, "V"),
+ (0x6A, "V"),
+ (0x6B, "V"),
+ (0x6C, "V"),
+ (0x6D, "V"),
+ (0x6E, "V"),
+ (0x6F, "V"),
+ (0x70, "V"),
+ (0x71, "V"),
+ (0x72, "V"),
+ (0x73, "V"),
+ (0x74, "V"),
+ (0x75, "V"),
+ (0x76, "V"),
+ (0x77, "V"),
+ (0x78, "V"),
+ (0x79, "V"),
+ (0x7A, "V"),
+ (0x7B, "3"),
+ (0x7C, "3"),
+ (0x7D, "3"),
+ (0x7E, "3"),
+ (0x7F, "3"),
+ (0x80, "X"),
+ (0x81, "X"),
+ (0x82, "X"),
+ (0x83, "X"),
+ (0x84, "X"),
+ (0x85, "X"),
+ (0x86, "X"),
+ (0x87, "X"),
+ (0x88, "X"),
+ (0x89, "X"),
+ (0x8A, "X"),
+ (0x8B, "X"),
+ (0x8C, "X"),
+ (0x8D, "X"),
+ (0x8E, "X"),
+ (0x8F, "X"),
+ (0x90, "X"),
+ (0x91, "X"),
+ (0x92, "X"),
+ (0x93, "X"),
+ (0x94, "X"),
+ (0x95, "X"),
+ (0x96, "X"),
+ (0x97, "X"),
+ (0x98, "X"),
+ (0x99, "X"),
+ (0x9A, "X"),
+ (0x9B, "X"),
+ (0x9C, "X"),
+ (0x9D, "X"),
+ (0x9E, "X"),
+ (0x9F, "X"),
+ (0xA0, "3", " "),
+ (0xA1, "V"),
+ (0xA2, "V"),
+ (0xA3, "V"),
+ (0xA4, "V"),
+ (0xA5, "V"),
+ (0xA6, "V"),
+ (0xA7, "V"),
+ (0xA8, "3", " ̈"),
+ (0xA9, "V"),
+ (0xAA, "M", "a"),
+ (0xAB, "V"),
+ (0xAC, "V"),
+ (0xAD, "I"),
+ (0xAE, "V"),
+ (0xAF, "3", " ̄"),
+ (0xB0, "V"),
+ (0xB1, "V"),
+ (0xB2, "M", "2"),
+ (0xB3, "M", "3"),
+ (0xB4, "3", " ́"),
+ (0xB5, "M", "μ"),
+ (0xB6, "V"),
+ (0xB7, "V"),
+ (0xB8, "3", " ̧"),
+ (0xB9, "M", "1"),
+ (0xBA, "M", "o"),
+ (0xBB, "V"),
+ (0xBC, "M", "1⁄4"),
+ (0xBD, "M", "1⁄2"),
+ (0xBE, "M", "3⁄4"),
+ (0xBF, "V"),
+ (0xC0, "M", "à"),
+ (0xC1, "M", "á"),
+ (0xC2, "M", "â"),
+ (0xC3, "M", "ã"),
+ (0xC4, "M", "ä"),
+ (0xC5, "M", "å"),
+ (0xC6, "M", "æ"),
+ (0xC7, "M", "ç"),
+ ]
+
+
+def _seg_2() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0xC8, "M", "è"),
+ (0xC9, "M", "é"),
+ (0xCA, "M", "ê"),
+ (0xCB, "M", "ë"),
+ (0xCC, "M", "ì"),
+ (0xCD, "M", "í"),
+ (0xCE, "M", "î"),
+ (0xCF, "M", "ï"),
+ (0xD0, "M", "ð"),
+ (0xD1, "M", "ñ"),
+ (0xD2, "M", "ò"),
+ (0xD3, "M", "ó"),
+ (0xD4, "M", "ô"),
+ (0xD5, "M", "õ"),
+ (0xD6, "M", "ö"),
+ (0xD7, "V"),
+ (0xD8, "M", "ø"),
+ (0xD9, "M", "ù"),
+ (0xDA, "M", "ú"),
+ (0xDB, "M", "û"),
+ (0xDC, "M", "ü"),
+ (0xDD, "M", "ý"),
+ (0xDE, "M", "þ"),
+ (0xDF, "D", "ss"),
+ (0xE0, "V"),
+ (0xE1, "V"),
+ (0xE2, "V"),
+ (0xE3, "V"),
+ (0xE4, "V"),
+ (0xE5, "V"),
+ (0xE6, "V"),
+ (0xE7, "V"),
+ (0xE8, "V"),
+ (0xE9, "V"),
+ (0xEA, "V"),
+ (0xEB, "V"),
+ (0xEC, "V"),
+ (0xED, "V"),
+ (0xEE, "V"),
+ (0xEF, "V"),
+ (0xF0, "V"),
+ (0xF1, "V"),
+ (0xF2, "V"),
+ (0xF3, "V"),
+ (0xF4, "V"),
+ (0xF5, "V"),
+ (0xF6, "V"),
+ (0xF7, "V"),
+ (0xF8, "V"),
+ (0xF9, "V"),
+ (0xFA, "V"),
+ (0xFB, "V"),
+ (0xFC, "V"),
+ (0xFD, "V"),
+ (0xFE, "V"),
+ (0xFF, "V"),
+ (0x100, "M", "ā"),
+ (0x101, "V"),
+ (0x102, "M", "ă"),
+ (0x103, "V"),
+ (0x104, "M", "ą"),
+ (0x105, "V"),
+ (0x106, "M", "ć"),
+ (0x107, "V"),
+ (0x108, "M", "ĉ"),
+ (0x109, "V"),
+ (0x10A, "M", "ċ"),
+ (0x10B, "V"),
+ (0x10C, "M", "č"),
+ (0x10D, "V"),
+ (0x10E, "M", "ď"),
+ (0x10F, "V"),
+ (0x110, "M", "đ"),
+ (0x111, "V"),
+ (0x112, "M", "ē"),
+ (0x113, "V"),
+ (0x114, "M", "ĕ"),
+ (0x115, "V"),
+ (0x116, "M", "ė"),
+ (0x117, "V"),
+ (0x118, "M", "ę"),
+ (0x119, "V"),
+ (0x11A, "M", "ě"),
+ (0x11B, "V"),
+ (0x11C, "M", "ĝ"),
+ (0x11D, "V"),
+ (0x11E, "M", "ğ"),
+ (0x11F, "V"),
+ (0x120, "M", "ġ"),
+ (0x121, "V"),
+ (0x122, "M", "ģ"),
+ (0x123, "V"),
+ (0x124, "M", "ĥ"),
+ (0x125, "V"),
+ (0x126, "M", "ħ"),
+ (0x127, "V"),
+ (0x128, "M", "ĩ"),
+ (0x129, "V"),
+ (0x12A, "M", "ī"),
+ (0x12B, "V"),
+ ]
+
+
+def _seg_3() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0x12C, "M", "ĭ"),
+ (0x12D, "V"),
+ (0x12E, "M", "į"),
+ (0x12F, "V"),
+ (0x130, "M", "i̇"),
+ (0x131, "V"),
+ (0x132, "M", "ij"),
+ (0x134, "M", "ĵ"),
+ (0x135, "V"),
+ (0x136, "M", "ķ"),
+ (0x137, "V"),
+ (0x139, "M", "ĺ"),
+ (0x13A, "V"),
+ (0x13B, "M", "ļ"),
+ (0x13C, "V"),
+ (0x13D, "M", "ľ"),
+ (0x13E, "V"),
+ (0x13F, "M", "l·"),
+ (0x141, "M", "ł"),
+ (0x142, "V"),
+ (0x143, "M", "ń"),
+ (0x144, "V"),
+ (0x145, "M", "ņ"),
+ (0x146, "V"),
+ (0x147, "M", "ň"),
+ (0x148, "V"),
+ (0x149, "M", "ʼn"),
+ (0x14A, "M", "ŋ"),
+ (0x14B, "V"),
+ (0x14C, "M", "ō"),
+ (0x14D, "V"),
+ (0x14E, "M", "ŏ"),
+ (0x14F, "V"),
+ (0x150, "M", "ő"),
+ (0x151, "V"),
+ (0x152, "M", "œ"),
+ (0x153, "V"),
+ (0x154, "M", "ŕ"),
+ (0x155, "V"),
+ (0x156, "M", "ŗ"),
+ (0x157, "V"),
+ (0x158, "M", "ř"),
+ (0x159, "V"),
+ (0x15A, "M", "ś"),
+ (0x15B, "V"),
+ (0x15C, "M", "ŝ"),
+ (0x15D, "V"),
+ (0x15E, "M", "ş"),
+ (0x15F, "V"),
+ (0x160, "M", "š"),
+ (0x161, "V"),
+ (0x162, "M", "ţ"),
+ (0x163, "V"),
+ (0x164, "M", "ť"),
+ (0x165, "V"),
+ (0x166, "M", "ŧ"),
+ (0x167, "V"),
+ (0x168, "M", "ũ"),
+ (0x169, "V"),
+ (0x16A, "M", "ū"),
+ (0x16B, "V"),
+ (0x16C, "M", "ŭ"),
+ (0x16D, "V"),
+ (0x16E, "M", "ů"),
+ (0x16F, "V"),
+ (0x170, "M", "ű"),
+ (0x171, "V"),
+ (0x172, "M", "ų"),
+ (0x173, "V"),
+ (0x174, "M", "ŵ"),
+ (0x175, "V"),
+ (0x176, "M", "ŷ"),
+ (0x177, "V"),
+ (0x178, "M", "ÿ"),
+ (0x179, "M", "ź"),
+ (0x17A, "V"),
+ (0x17B, "M", "ż"),
+ (0x17C, "V"),
+ (0x17D, "M", "ž"),
+ (0x17E, "V"),
+ (0x17F, "M", "s"),
+ (0x180, "V"),
+ (0x181, "M", "ɓ"),
+ (0x182, "M", "ƃ"),
+ (0x183, "V"),
+ (0x184, "M", "ƅ"),
+ (0x185, "V"),
+ (0x186, "M", "ɔ"),
+ (0x187, "M", "ƈ"),
+ (0x188, "V"),
+ (0x189, "M", "ɖ"),
+ (0x18A, "M", "ɗ"),
+ (0x18B, "M", "ƌ"),
+ (0x18C, "V"),
+ (0x18E, "M", "ǝ"),
+ (0x18F, "M", "ə"),
+ (0x190, "M", "ɛ"),
+ (0x191, "M", "ƒ"),
+ (0x192, "V"),
+ (0x193, "M", "ɠ"),
+ ]
+
+
+def _seg_4() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0x194, "M", "ɣ"),
+ (0x195, "V"),
+ (0x196, "M", "ɩ"),
+ (0x197, "M", "ɨ"),
+ (0x198, "M", "ƙ"),
+ (0x199, "V"),
+ (0x19C, "M", "ɯ"),
+ (0x19D, "M", "ɲ"),
+ (0x19E, "V"),
+ (0x19F, "M", "ɵ"),
+ (0x1A0, "M", "ơ"),
+ (0x1A1, "V"),
+ (0x1A2, "M", "ƣ"),
+ (0x1A3, "V"),
+ (0x1A4, "M", "ƥ"),
+ (0x1A5, "V"),
+ (0x1A6, "M", "ʀ"),
+ (0x1A7, "M", "ƨ"),
+ (0x1A8, "V"),
+ (0x1A9, "M", "ʃ"),
+ (0x1AA, "V"),
+ (0x1AC, "M", "ƭ"),
+ (0x1AD, "V"),
+ (0x1AE, "M", "ʈ"),
+ (0x1AF, "M", "ư"),
+ (0x1B0, "V"),
+ (0x1B1, "M", "ʊ"),
+ (0x1B2, "M", "ʋ"),
+ (0x1B3, "M", "ƴ"),
+ (0x1B4, "V"),
+ (0x1B5, "M", "ƶ"),
+ (0x1B6, "V"),
+ (0x1B7, "M", "ʒ"),
+ (0x1B8, "M", "ƹ"),
+ (0x1B9, "V"),
+ (0x1BC, "M", "ƽ"),
+ (0x1BD, "V"),
+ (0x1C4, "M", "dž"),
+ (0x1C7, "M", "lj"),
+ (0x1CA, "M", "nj"),
+ (0x1CD, "M", "ǎ"),
+ (0x1CE, "V"),
+ (0x1CF, "M", "ǐ"),
+ (0x1D0, "V"),
+ (0x1D1, "M", "ǒ"),
+ (0x1D2, "V"),
+ (0x1D3, "M", "ǔ"),
+ (0x1D4, "V"),
+ (0x1D5, "M", "ǖ"),
+ (0x1D6, "V"),
+ (0x1D7, "M", "ǘ"),
+ (0x1D8, "V"),
+ (0x1D9, "M", "ǚ"),
+ (0x1DA, "V"),
+ (0x1DB, "M", "ǜ"),
+ (0x1DC, "V"),
+ (0x1DE, "M", "ǟ"),
+ (0x1DF, "V"),
+ (0x1E0, "M", "ǡ"),
+ (0x1E1, "V"),
+ (0x1E2, "M", "ǣ"),
+ (0x1E3, "V"),
+ (0x1E4, "M", "ǥ"),
+ (0x1E5, "V"),
+ (0x1E6, "M", "ǧ"),
+ (0x1E7, "V"),
+ (0x1E8, "M", "ǩ"),
+ (0x1E9, "V"),
+ (0x1EA, "M", "ǫ"),
+ (0x1EB, "V"),
+ (0x1EC, "M", "ǭ"),
+ (0x1ED, "V"),
+ (0x1EE, "M", "ǯ"),
+ (0x1EF, "V"),
+ (0x1F1, "M", "dz"),
+ (0x1F4, "M", "ǵ"),
+ (0x1F5, "V"),
+ (0x1F6, "M", "ƕ"),
+ (0x1F7, "M", "ƿ"),
+ (0x1F8, "M", "ǹ"),
+ (0x1F9, "V"),
+ (0x1FA, "M", "ǻ"),
+ (0x1FB, "V"),
+ (0x1FC, "M", "ǽ"),
+ (0x1FD, "V"),
+ (0x1FE, "M", "ǿ"),
+ (0x1FF, "V"),
+ (0x200, "M", "ȁ"),
+ (0x201, "V"),
+ (0x202, "M", "ȃ"),
+ (0x203, "V"),
+ (0x204, "M", "ȅ"),
+ (0x205, "V"),
+ (0x206, "M", "ȇ"),
+ (0x207, "V"),
+ (0x208, "M", "ȉ"),
+ (0x209, "V"),
+ (0x20A, "M", "ȋ"),
+ (0x20B, "V"),
+ (0x20C, "M", "ȍ"),
+ ]
+
+
+def _seg_5() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0x20D, "V"),
+ (0x20E, "M", "ȏ"),
+ (0x20F, "V"),
+ (0x210, "M", "ȑ"),
+ (0x211, "V"),
+ (0x212, "M", "ȓ"),
+ (0x213, "V"),
+ (0x214, "M", "ȕ"),
+ (0x215, "V"),
+ (0x216, "M", "ȗ"),
+ (0x217, "V"),
+ (0x218, "M", "ș"),
+ (0x219, "V"),
+ (0x21A, "M", "ț"),
+ (0x21B, "V"),
+ (0x21C, "M", "ȝ"),
+ (0x21D, "V"),
+ (0x21E, "M", "ȟ"),
+ (0x21F, "V"),
+ (0x220, "M", "ƞ"),
+ (0x221, "V"),
+ (0x222, "M", "ȣ"),
+ (0x223, "V"),
+ (0x224, "M", "ȥ"),
+ (0x225, "V"),
+ (0x226, "M", "ȧ"),
+ (0x227, "V"),
+ (0x228, "M", "ȩ"),
+ (0x229, "V"),
+ (0x22A, "M", "ȫ"),
+ (0x22B, "V"),
+ (0x22C, "M", "ȭ"),
+ (0x22D, "V"),
+ (0x22E, "M", "ȯ"),
+ (0x22F, "V"),
+ (0x230, "M", "ȱ"),
+ (0x231, "V"),
+ (0x232, "M", "ȳ"),
+ (0x233, "V"),
+ (0x23A, "M", "ⱥ"),
+ (0x23B, "M", "ȼ"),
+ (0x23C, "V"),
+ (0x23D, "M", "ƚ"),
+ (0x23E, "M", "ⱦ"),
+ (0x23F, "V"),
+ (0x241, "M", "ɂ"),
+ (0x242, "V"),
+ (0x243, "M", "ƀ"),
+ (0x244, "M", "ʉ"),
+ (0x245, "M", "ʌ"),
+ (0x246, "M", "ɇ"),
+ (0x247, "V"),
+ (0x248, "M", "ɉ"),
+ (0x249, "V"),
+ (0x24A, "M", "ɋ"),
+ (0x24B, "V"),
+ (0x24C, "M", "ɍ"),
+ (0x24D, "V"),
+ (0x24E, "M", "ɏ"),
+ (0x24F, "V"),
+ (0x2B0, "M", "h"),
+ (0x2B1, "M", "ɦ"),
+ (0x2B2, "M", "j"),
+ (0x2B3, "M", "r"),
+ (0x2B4, "M", "ɹ"),
+ (0x2B5, "M", "ɻ"),
+ (0x2B6, "M", "ʁ"),
+ (0x2B7, "M", "w"),
+ (0x2B8, "M", "y"),
+ (0x2B9, "V"),
+ (0x2D8, "3", " ̆"),
+ (0x2D9, "3", " ̇"),
+ (0x2DA, "3", " ̊"),
+ (0x2DB, "3", " ̨"),
+ (0x2DC, "3", " ̃"),
+ (0x2DD, "3", " ̋"),
+ (0x2DE, "V"),
+ (0x2E0, "M", "ɣ"),
+ (0x2E1, "M", "l"),
+ (0x2E2, "M", "s"),
+ (0x2E3, "M", "x"),
+ (0x2E4, "M", "ʕ"),
+ (0x2E5, "V"),
+ (0x340, "M", "̀"),
+ (0x341, "M", "́"),
+ (0x342, "V"),
+ (0x343, "M", "̓"),
+ (0x344, "M", "̈́"),
+ (0x345, "M", "ι"),
+ (0x346, "V"),
+ (0x34F, "I"),
+ (0x350, "V"),
+ (0x370, "M", "ͱ"),
+ (0x371, "V"),
+ (0x372, "M", "ͳ"),
+ (0x373, "V"),
+ (0x374, "M", "ʹ"),
+ (0x375, "V"),
+ (0x376, "M", "ͷ"),
+ (0x377, "V"),
+ ]
+
+
+def _seg_6() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0x378, "X"),
+ (0x37A, "3", " ι"),
+ (0x37B, "V"),
+ (0x37E, "3", ";"),
+ (0x37F, "M", "ϳ"),
+ (0x380, "X"),
+ (0x384, "3", " ́"),
+ (0x385, "3", " ̈́"),
+ (0x386, "M", "ά"),
+ (0x387, "M", "·"),
+ (0x388, "M", "έ"),
+ (0x389, "M", "ή"),
+ (0x38A, "M", "ί"),
+ (0x38B, "X"),
+ (0x38C, "M", "ό"),
+ (0x38D, "X"),
+ (0x38E, "M", "ύ"),
+ (0x38F, "M", "ώ"),
+ (0x390, "V"),
+ (0x391, "M", "α"),
+ (0x392, "M", "β"),
+ (0x393, "M", "γ"),
+ (0x394, "M", "δ"),
+ (0x395, "M", "ε"),
+ (0x396, "M", "ζ"),
+ (0x397, "M", "η"),
+ (0x398, "M", "θ"),
+ (0x399, "M", "ι"),
+ (0x39A, "M", "κ"),
+ (0x39B, "M", "λ"),
+ (0x39C, "M", "μ"),
+ (0x39D, "M", "ν"),
+ (0x39E, "M", "ξ"),
+ (0x39F, "M", "ο"),
+ (0x3A0, "M", "π"),
+ (0x3A1, "M", "ρ"),
+ (0x3A2, "X"),
+ (0x3A3, "M", "σ"),
+ (0x3A4, "M", "τ"),
+ (0x3A5, "M", "υ"),
+ (0x3A6, "M", "φ"),
+ (0x3A7, "M", "χ"),
+ (0x3A8, "M", "ψ"),
+ (0x3A9, "M", "ω"),
+ (0x3AA, "M", "ϊ"),
+ (0x3AB, "M", "ϋ"),
+ (0x3AC, "V"),
+ (0x3C2, "D", "σ"),
+ (0x3C3, "V"),
+ (0x3CF, "M", "ϗ"),
+ (0x3D0, "M", "β"),
+ (0x3D1, "M", "θ"),
+ (0x3D2, "M", "υ"),
+ (0x3D3, "M", "ύ"),
+ (0x3D4, "M", "ϋ"),
+ (0x3D5, "M", "φ"),
+ (0x3D6, "M", "π"),
+ (0x3D7, "V"),
+ (0x3D8, "M", "ϙ"),
+ (0x3D9, "V"),
+ (0x3DA, "M", "ϛ"),
+ (0x3DB, "V"),
+ (0x3DC, "M", "ϝ"),
+ (0x3DD, "V"),
+ (0x3DE, "M", "ϟ"),
+ (0x3DF, "V"),
+ (0x3E0, "M", "ϡ"),
+ (0x3E1, "V"),
+ (0x3E2, "M", "ϣ"),
+ (0x3E3, "V"),
+ (0x3E4, "M", "ϥ"),
+ (0x3E5, "V"),
+ (0x3E6, "M", "ϧ"),
+ (0x3E7, "V"),
+ (0x3E8, "M", "ϩ"),
+ (0x3E9, "V"),
+ (0x3EA, "M", "ϫ"),
+ (0x3EB, "V"),
+ (0x3EC, "M", "ϭ"),
+ (0x3ED, "V"),
+ (0x3EE, "M", "ϯ"),
+ (0x3EF, "V"),
+ (0x3F0, "M", "κ"),
+ (0x3F1, "M", "ρ"),
+ (0x3F2, "M", "σ"),
+ (0x3F3, "V"),
+ (0x3F4, "M", "θ"),
+ (0x3F5, "M", "ε"),
+ (0x3F6, "V"),
+ (0x3F7, "M", "ϸ"),
+ (0x3F8, "V"),
+ (0x3F9, "M", "σ"),
+ (0x3FA, "M", "ϻ"),
+ (0x3FB, "V"),
+ (0x3FD, "M", "ͻ"),
+ (0x3FE, "M", "ͼ"),
+ (0x3FF, "M", "ͽ"),
+ (0x400, "M", "ѐ"),
+ (0x401, "M", "ё"),
+ (0x402, "M", "ђ"),
+ ]
+
+
+def _seg_7() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0x403, "M", "ѓ"),
+ (0x404, "M", "є"),
+ (0x405, "M", "ѕ"),
+ (0x406, "M", "і"),
+ (0x407, "M", "ї"),
+ (0x408, "M", "ј"),
+ (0x409, "M", "љ"),
+ (0x40A, "M", "њ"),
+ (0x40B, "M", "ћ"),
+ (0x40C, "M", "ќ"),
+ (0x40D, "M", "ѝ"),
+ (0x40E, "M", "ў"),
+ (0x40F, "M", "џ"),
+ (0x410, "M", "а"),
+ (0x411, "M", "б"),
+ (0x412, "M", "в"),
+ (0x413, "M", "г"),
+ (0x414, "M", "д"),
+ (0x415, "M", "е"),
+ (0x416, "M", "ж"),
+ (0x417, "M", "з"),
+ (0x418, "M", "и"),
+ (0x419, "M", "й"),
+ (0x41A, "M", "к"),
+ (0x41B, "M", "л"),
+ (0x41C, "M", "м"),
+ (0x41D, "M", "н"),
+ (0x41E, "M", "о"),
+ (0x41F, "M", "п"),
+ (0x420, "M", "р"),
+ (0x421, "M", "с"),
+ (0x422, "M", "т"),
+ (0x423, "M", "у"),
+ (0x424, "M", "ф"),
+ (0x425, "M", "х"),
+ (0x426, "M", "ц"),
+ (0x427, "M", "ч"),
+ (0x428, "M", "ш"),
+ (0x429, "M", "щ"),
+ (0x42A, "M", "ъ"),
+ (0x42B, "M", "ы"),
+ (0x42C, "M", "ь"),
+ (0x42D, "M", "э"),
+ (0x42E, "M", "ю"),
+ (0x42F, "M", "я"),
+ (0x430, "V"),
+ (0x460, "M", "ѡ"),
+ (0x461, "V"),
+ (0x462, "M", "ѣ"),
+ (0x463, "V"),
+ (0x464, "M", "ѥ"),
+ (0x465, "V"),
+ (0x466, "M", "ѧ"),
+ (0x467, "V"),
+ (0x468, "M", "ѩ"),
+ (0x469, "V"),
+ (0x46A, "M", "ѫ"),
+ (0x46B, "V"),
+ (0x46C, "M", "ѭ"),
+ (0x46D, "V"),
+ (0x46E, "M", "ѯ"),
+ (0x46F, "V"),
+ (0x470, "M", "ѱ"),
+ (0x471, "V"),
+ (0x472, "M", "ѳ"),
+ (0x473, "V"),
+ (0x474, "M", "ѵ"),
+ (0x475, "V"),
+ (0x476, "M", "ѷ"),
+ (0x477, "V"),
+ (0x478, "M", "ѹ"),
+ (0x479, "V"),
+ (0x47A, "M", "ѻ"),
+ (0x47B, "V"),
+ (0x47C, "M", "ѽ"),
+ (0x47D, "V"),
+ (0x47E, "M", "ѿ"),
+ (0x47F, "V"),
+ (0x480, "M", "ҁ"),
+ (0x481, "V"),
+ (0x48A, "M", "ҋ"),
+ (0x48B, "V"),
+ (0x48C, "M", "ҍ"),
+ (0x48D, "V"),
+ (0x48E, "M", "ҏ"),
+ (0x48F, "V"),
+ (0x490, "M", "ґ"),
+ (0x491, "V"),
+ (0x492, "M", "ғ"),
+ (0x493, "V"),
+ (0x494, "M", "ҕ"),
+ (0x495, "V"),
+ (0x496, "M", "җ"),
+ (0x497, "V"),
+ (0x498, "M", "ҙ"),
+ (0x499, "V"),
+ (0x49A, "M", "қ"),
+ (0x49B, "V"),
+ (0x49C, "M", "ҝ"),
+ (0x49D, "V"),
+ ]
+
+
+def _seg_8() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0x49E, "M", "ҟ"),
+ (0x49F, "V"),
+ (0x4A0, "M", "ҡ"),
+ (0x4A1, "V"),
+ (0x4A2, "M", "ң"),
+ (0x4A3, "V"),
+ (0x4A4, "M", "ҥ"),
+ (0x4A5, "V"),
+ (0x4A6, "M", "ҧ"),
+ (0x4A7, "V"),
+ (0x4A8, "M", "ҩ"),
+ (0x4A9, "V"),
+ (0x4AA, "M", "ҫ"),
+ (0x4AB, "V"),
+ (0x4AC, "M", "ҭ"),
+ (0x4AD, "V"),
+ (0x4AE, "M", "ү"),
+ (0x4AF, "V"),
+ (0x4B0, "M", "ұ"),
+ (0x4B1, "V"),
+ (0x4B2, "M", "ҳ"),
+ (0x4B3, "V"),
+ (0x4B4, "M", "ҵ"),
+ (0x4B5, "V"),
+ (0x4B6, "M", "ҷ"),
+ (0x4B7, "V"),
+ (0x4B8, "M", "ҹ"),
+ (0x4B9, "V"),
+ (0x4BA, "M", "һ"),
+ (0x4BB, "V"),
+ (0x4BC, "M", "ҽ"),
+ (0x4BD, "V"),
+ (0x4BE, "M", "ҿ"),
+ (0x4BF, "V"),
+ (0x4C0, "X"),
+ (0x4C1, "M", "ӂ"),
+ (0x4C2, "V"),
+ (0x4C3, "M", "ӄ"),
+ (0x4C4, "V"),
+ (0x4C5, "M", "ӆ"),
+ (0x4C6, "V"),
+ (0x4C7, "M", "ӈ"),
+ (0x4C8, "V"),
+ (0x4C9, "M", "ӊ"),
+ (0x4CA, "V"),
+ (0x4CB, "M", "ӌ"),
+ (0x4CC, "V"),
+ (0x4CD, "M", "ӎ"),
+ (0x4CE, "V"),
+ (0x4D0, "M", "ӑ"),
+ (0x4D1, "V"),
+ (0x4D2, "M", "ӓ"),
+ (0x4D3, "V"),
+ (0x4D4, "M", "ӕ"),
+ (0x4D5, "V"),
+ (0x4D6, "M", "ӗ"),
+ (0x4D7, "V"),
+ (0x4D8, "M", "ә"),
+ (0x4D9, "V"),
+ (0x4DA, "M", "ӛ"),
+ (0x4DB, "V"),
+ (0x4DC, "M", "ӝ"),
+ (0x4DD, "V"),
+ (0x4DE, "M", "ӟ"),
+ (0x4DF, "V"),
+ (0x4E0, "M", "ӡ"),
+ (0x4E1, "V"),
+ (0x4E2, "M", "ӣ"),
+ (0x4E3, "V"),
+ (0x4E4, "M", "ӥ"),
+ (0x4E5, "V"),
+ (0x4E6, "M", "ӧ"),
+ (0x4E7, "V"),
+ (0x4E8, "M", "ө"),
+ (0x4E9, "V"),
+ (0x4EA, "M", "ӫ"),
+ (0x4EB, "V"),
+ (0x4EC, "M", "ӭ"),
+ (0x4ED, "V"),
+ (0x4EE, "M", "ӯ"),
+ (0x4EF, "V"),
+ (0x4F0, "M", "ӱ"),
+ (0x4F1, "V"),
+ (0x4F2, "M", "ӳ"),
+ (0x4F3, "V"),
+ (0x4F4, "M", "ӵ"),
+ (0x4F5, "V"),
+ (0x4F6, "M", "ӷ"),
+ (0x4F7, "V"),
+ (0x4F8, "M", "ӹ"),
+ (0x4F9, "V"),
+ (0x4FA, "M", "ӻ"),
+ (0x4FB, "V"),
+ (0x4FC, "M", "ӽ"),
+ (0x4FD, "V"),
+ (0x4FE, "M", "ӿ"),
+ (0x4FF, "V"),
+ (0x500, "M", "ԁ"),
+ (0x501, "V"),
+ (0x502, "M", "ԃ"),
+ ]
+
+
+def _seg_9() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0x503, "V"),
+ (0x504, "M", "ԅ"),
+ (0x505, "V"),
+ (0x506, "M", "ԇ"),
+ (0x507, "V"),
+ (0x508, "M", "ԉ"),
+ (0x509, "V"),
+ (0x50A, "M", "ԋ"),
+ (0x50B, "V"),
+ (0x50C, "M", "ԍ"),
+ (0x50D, "V"),
+ (0x50E, "M", "ԏ"),
+ (0x50F, "V"),
+ (0x510, "M", "ԑ"),
+ (0x511, "V"),
+ (0x512, "M", "ԓ"),
+ (0x513, "V"),
+ (0x514, "M", "ԕ"),
+ (0x515, "V"),
+ (0x516, "M", "ԗ"),
+ (0x517, "V"),
+ (0x518, "M", "ԙ"),
+ (0x519, "V"),
+ (0x51A, "M", "ԛ"),
+ (0x51B, "V"),
+ (0x51C, "M", "ԝ"),
+ (0x51D, "V"),
+ (0x51E, "M", "ԟ"),
+ (0x51F, "V"),
+ (0x520, "M", "ԡ"),
+ (0x521, "V"),
+ (0x522, "M", "ԣ"),
+ (0x523, "V"),
+ (0x524, "M", "ԥ"),
+ (0x525, "V"),
+ (0x526, "M", "ԧ"),
+ (0x527, "V"),
+ (0x528, "M", "ԩ"),
+ (0x529, "V"),
+ (0x52A, "M", "ԫ"),
+ (0x52B, "V"),
+ (0x52C, "M", "ԭ"),
+ (0x52D, "V"),
+ (0x52E, "M", "ԯ"),
+ (0x52F, "V"),
+ (0x530, "X"),
+ (0x531, "M", "ա"),
+ (0x532, "M", "բ"),
+ (0x533, "M", "գ"),
+ (0x534, "M", "դ"),
+ (0x535, "M", "ե"),
+ (0x536, "M", "զ"),
+ (0x537, "M", "է"),
+ (0x538, "M", "ը"),
+ (0x539, "M", "թ"),
+ (0x53A, "M", "ժ"),
+ (0x53B, "M", "ի"),
+ (0x53C, "M", "լ"),
+ (0x53D, "M", "խ"),
+ (0x53E, "M", "ծ"),
+ (0x53F, "M", "կ"),
+ (0x540, "M", "հ"),
+ (0x541, "M", "ձ"),
+ (0x542, "M", "ղ"),
+ (0x543, "M", "ճ"),
+ (0x544, "M", "մ"),
+ (0x545, "M", "յ"),
+ (0x546, "M", "ն"),
+ (0x547, "M", "շ"),
+ (0x548, "M", "ո"),
+ (0x549, "M", "չ"),
+ (0x54A, "M", "պ"),
+ (0x54B, "M", "ջ"),
+ (0x54C, "M", "ռ"),
+ (0x54D, "M", "ս"),
+ (0x54E, "M", "վ"),
+ (0x54F, "M", "տ"),
+ (0x550, "M", "ր"),
+ (0x551, "M", "ց"),
+ (0x552, "M", "ւ"),
+ (0x553, "M", "փ"),
+ (0x554, "M", "ք"),
+ (0x555, "M", "օ"),
+ (0x556, "M", "ֆ"),
+ (0x557, "X"),
+ (0x559, "V"),
+ (0x587, "M", "եւ"),
+ (0x588, "V"),
+ (0x58B, "X"),
+ (0x58D, "V"),
+ (0x590, "X"),
+ (0x591, "V"),
+ (0x5C8, "X"),
+ (0x5D0, "V"),
+ (0x5EB, "X"),
+ (0x5EF, "V"),
+ (0x5F5, "X"),
+ (0x606, "V"),
+ (0x61C, "X"),
+ (0x61D, "V"),
+ ]
+
+
+def _seg_10() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0x675, "M", "اٴ"),
+ (0x676, "M", "وٴ"),
+ (0x677, "M", "ۇٴ"),
+ (0x678, "M", "يٴ"),
+ (0x679, "V"),
+ (0x6DD, "X"),
+ (0x6DE, "V"),
+ (0x70E, "X"),
+ (0x710, "V"),
+ (0x74B, "X"),
+ (0x74D, "V"),
+ (0x7B2, "X"),
+ (0x7C0, "V"),
+ (0x7FB, "X"),
+ (0x7FD, "V"),
+ (0x82E, "X"),
+ (0x830, "V"),
+ (0x83F, "X"),
+ (0x840, "V"),
+ (0x85C, "X"),
+ (0x85E, "V"),
+ (0x85F, "X"),
+ (0x860, "V"),
+ (0x86B, "X"),
+ (0x870, "V"),
+ (0x88F, "X"),
+ (0x898, "V"),
+ (0x8E2, "X"),
+ (0x8E3, "V"),
+ (0x958, "M", "क़"),
+ (0x959, "M", "ख़"),
+ (0x95A, "M", "ग़"),
+ (0x95B, "M", "ज़"),
+ (0x95C, "M", "ड़"),
+ (0x95D, "M", "ढ़"),
+ (0x95E, "M", "फ़"),
+ (0x95F, "M", "य़"),
+ (0x960, "V"),
+ (0x984, "X"),
+ (0x985, "V"),
+ (0x98D, "X"),
+ (0x98F, "V"),
+ (0x991, "X"),
+ (0x993, "V"),
+ (0x9A9, "X"),
+ (0x9AA, "V"),
+ (0x9B1, "X"),
+ (0x9B2, "V"),
+ (0x9B3, "X"),
+ (0x9B6, "V"),
+ (0x9BA, "X"),
+ (0x9BC, "V"),
+ (0x9C5, "X"),
+ (0x9C7, "V"),
+ (0x9C9, "X"),
+ (0x9CB, "V"),
+ (0x9CF, "X"),
+ (0x9D7, "V"),
+ (0x9D8, "X"),
+ (0x9DC, "M", "ড়"),
+ (0x9DD, "M", "ঢ়"),
+ (0x9DE, "X"),
+ (0x9DF, "M", "য়"),
+ (0x9E0, "V"),
+ (0x9E4, "X"),
+ (0x9E6, "V"),
+ (0x9FF, "X"),
+ (0xA01, "V"),
+ (0xA04, "X"),
+ (0xA05, "V"),
+ (0xA0B, "X"),
+ (0xA0F, "V"),
+ (0xA11, "X"),
+ (0xA13, "V"),
+ (0xA29, "X"),
+ (0xA2A, "V"),
+ (0xA31, "X"),
+ (0xA32, "V"),
+ (0xA33, "M", "ਲ਼"),
+ (0xA34, "X"),
+ (0xA35, "V"),
+ (0xA36, "M", "ਸ਼"),
+ (0xA37, "X"),
+ (0xA38, "V"),
+ (0xA3A, "X"),
+ (0xA3C, "V"),
+ (0xA3D, "X"),
+ (0xA3E, "V"),
+ (0xA43, "X"),
+ (0xA47, "V"),
+ (0xA49, "X"),
+ (0xA4B, "V"),
+ (0xA4E, "X"),
+ (0xA51, "V"),
+ (0xA52, "X"),
+ (0xA59, "M", "ਖ਼"),
+ (0xA5A, "M", "ਗ਼"),
+ (0xA5B, "M", "ਜ਼"),
+ (0xA5C, "V"),
+ (0xA5D, "X"),
+ ]
+
+
+def _seg_11() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0xA5E, "M", "ਫ਼"),
+ (0xA5F, "X"),
+ (0xA66, "V"),
+ (0xA77, "X"),
+ (0xA81, "V"),
+ (0xA84, "X"),
+ (0xA85, "V"),
+ (0xA8E, "X"),
+ (0xA8F, "V"),
+ (0xA92, "X"),
+ (0xA93, "V"),
+ (0xAA9, "X"),
+ (0xAAA, "V"),
+ (0xAB1, "X"),
+ (0xAB2, "V"),
+ (0xAB4, "X"),
+ (0xAB5, "V"),
+ (0xABA, "X"),
+ (0xABC, "V"),
+ (0xAC6, "X"),
+ (0xAC7, "V"),
+ (0xACA, "X"),
+ (0xACB, "V"),
+ (0xACE, "X"),
+ (0xAD0, "V"),
+ (0xAD1, "X"),
+ (0xAE0, "V"),
+ (0xAE4, "X"),
+ (0xAE6, "V"),
+ (0xAF2, "X"),
+ (0xAF9, "V"),
+ (0xB00, "X"),
+ (0xB01, "V"),
+ (0xB04, "X"),
+ (0xB05, "V"),
+ (0xB0D, "X"),
+ (0xB0F, "V"),
+ (0xB11, "X"),
+ (0xB13, "V"),
+ (0xB29, "X"),
+ (0xB2A, "V"),
+ (0xB31, "X"),
+ (0xB32, "V"),
+ (0xB34, "X"),
+ (0xB35, "V"),
+ (0xB3A, "X"),
+ (0xB3C, "V"),
+ (0xB45, "X"),
+ (0xB47, "V"),
+ (0xB49, "X"),
+ (0xB4B, "V"),
+ (0xB4E, "X"),
+ (0xB55, "V"),
+ (0xB58, "X"),
+ (0xB5C, "M", "ଡ଼"),
+ (0xB5D, "M", "ଢ଼"),
+ (0xB5E, "X"),
+ (0xB5F, "V"),
+ (0xB64, "X"),
+ (0xB66, "V"),
+ (0xB78, "X"),
+ (0xB82, "V"),
+ (0xB84, "X"),
+ (0xB85, "V"),
+ (0xB8B, "X"),
+ (0xB8E, "V"),
+ (0xB91, "X"),
+ (0xB92, "V"),
+ (0xB96, "X"),
+ (0xB99, "V"),
+ (0xB9B, "X"),
+ (0xB9C, "V"),
+ (0xB9D, "X"),
+ (0xB9E, "V"),
+ (0xBA0, "X"),
+ (0xBA3, "V"),
+ (0xBA5, "X"),
+ (0xBA8, "V"),
+ (0xBAB, "X"),
+ (0xBAE, "V"),
+ (0xBBA, "X"),
+ (0xBBE, "V"),
+ (0xBC3, "X"),
+ (0xBC6, "V"),
+ (0xBC9, "X"),
+ (0xBCA, "V"),
+ (0xBCE, "X"),
+ (0xBD0, "V"),
+ (0xBD1, "X"),
+ (0xBD7, "V"),
+ (0xBD8, "X"),
+ (0xBE6, "V"),
+ (0xBFB, "X"),
+ (0xC00, "V"),
+ (0xC0D, "X"),
+ (0xC0E, "V"),
+ (0xC11, "X"),
+ (0xC12, "V"),
+ (0xC29, "X"),
+ (0xC2A, "V"),
+ ]
+
+
+def _seg_12() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0xC3A, "X"),
+ (0xC3C, "V"),
+ (0xC45, "X"),
+ (0xC46, "V"),
+ (0xC49, "X"),
+ (0xC4A, "V"),
+ (0xC4E, "X"),
+ (0xC55, "V"),
+ (0xC57, "X"),
+ (0xC58, "V"),
+ (0xC5B, "X"),
+ (0xC5D, "V"),
+ (0xC5E, "X"),
+ (0xC60, "V"),
+ (0xC64, "X"),
+ (0xC66, "V"),
+ (0xC70, "X"),
+ (0xC77, "V"),
+ (0xC8D, "X"),
+ (0xC8E, "V"),
+ (0xC91, "X"),
+ (0xC92, "V"),
+ (0xCA9, "X"),
+ (0xCAA, "V"),
+ (0xCB4, "X"),
+ (0xCB5, "V"),
+ (0xCBA, "X"),
+ (0xCBC, "V"),
+ (0xCC5, "X"),
+ (0xCC6, "V"),
+ (0xCC9, "X"),
+ (0xCCA, "V"),
+ (0xCCE, "X"),
+ (0xCD5, "V"),
+ (0xCD7, "X"),
+ (0xCDD, "V"),
+ (0xCDF, "X"),
+ (0xCE0, "V"),
+ (0xCE4, "X"),
+ (0xCE6, "V"),
+ (0xCF0, "X"),
+ (0xCF1, "V"),
+ (0xCF4, "X"),
+ (0xD00, "V"),
+ (0xD0D, "X"),
+ (0xD0E, "V"),
+ (0xD11, "X"),
+ (0xD12, "V"),
+ (0xD45, "X"),
+ (0xD46, "V"),
+ (0xD49, "X"),
+ (0xD4A, "V"),
+ (0xD50, "X"),
+ (0xD54, "V"),
+ (0xD64, "X"),
+ (0xD66, "V"),
+ (0xD80, "X"),
+ (0xD81, "V"),
+ (0xD84, "X"),
+ (0xD85, "V"),
+ (0xD97, "X"),
+ (0xD9A, "V"),
+ (0xDB2, "X"),
+ (0xDB3, "V"),
+ (0xDBC, "X"),
+ (0xDBD, "V"),
+ (0xDBE, "X"),
+ (0xDC0, "V"),
+ (0xDC7, "X"),
+ (0xDCA, "V"),
+ (0xDCB, "X"),
+ (0xDCF, "V"),
+ (0xDD5, "X"),
+ (0xDD6, "V"),
+ (0xDD7, "X"),
+ (0xDD8, "V"),
+ (0xDE0, "X"),
+ (0xDE6, "V"),
+ (0xDF0, "X"),
+ (0xDF2, "V"),
+ (0xDF5, "X"),
+ (0xE01, "V"),
+ (0xE33, "M", "ํา"),
+ (0xE34, "V"),
+ (0xE3B, "X"),
+ (0xE3F, "V"),
+ (0xE5C, "X"),
+ (0xE81, "V"),
+ (0xE83, "X"),
+ (0xE84, "V"),
+ (0xE85, "X"),
+ (0xE86, "V"),
+ (0xE8B, "X"),
+ (0xE8C, "V"),
+ (0xEA4, "X"),
+ (0xEA5, "V"),
+ (0xEA6, "X"),
+ (0xEA7, "V"),
+ (0xEB3, "M", "ໍາ"),
+ (0xEB4, "V"),
+ ]
+
+
+def _seg_13() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0xEBE, "X"),
+ (0xEC0, "V"),
+ (0xEC5, "X"),
+ (0xEC6, "V"),
+ (0xEC7, "X"),
+ (0xEC8, "V"),
+ (0xECF, "X"),
+ (0xED0, "V"),
+ (0xEDA, "X"),
+ (0xEDC, "M", "ຫນ"),
+ (0xEDD, "M", "ຫມ"),
+ (0xEDE, "V"),
+ (0xEE0, "X"),
+ (0xF00, "V"),
+ (0xF0C, "M", "་"),
+ (0xF0D, "V"),
+ (0xF43, "M", "གྷ"),
+ (0xF44, "V"),
+ (0xF48, "X"),
+ (0xF49, "V"),
+ (0xF4D, "M", "ཌྷ"),
+ (0xF4E, "V"),
+ (0xF52, "M", "དྷ"),
+ (0xF53, "V"),
+ (0xF57, "M", "བྷ"),
+ (0xF58, "V"),
+ (0xF5C, "M", "ཛྷ"),
+ (0xF5D, "V"),
+ (0xF69, "M", "ཀྵ"),
+ (0xF6A, "V"),
+ (0xF6D, "X"),
+ (0xF71, "V"),
+ (0xF73, "M", "ཱི"),
+ (0xF74, "V"),
+ (0xF75, "M", "ཱུ"),
+ (0xF76, "M", "ྲྀ"),
+ (0xF77, "M", "ྲཱྀ"),
+ (0xF78, "M", "ླྀ"),
+ (0xF79, "M", "ླཱྀ"),
+ (0xF7A, "V"),
+ (0xF81, "M", "ཱྀ"),
+ (0xF82, "V"),
+ (0xF93, "M", "ྒྷ"),
+ (0xF94, "V"),
+ (0xF98, "X"),
+ (0xF99, "V"),
+ (0xF9D, "M", "ྜྷ"),
+ (0xF9E, "V"),
+ (0xFA2, "M", "ྡྷ"),
+ (0xFA3, "V"),
+ (0xFA7, "M", "ྦྷ"),
+ (0xFA8, "V"),
+ (0xFAC, "M", "ྫྷ"),
+ (0xFAD, "V"),
+ (0xFB9, "M", "ྐྵ"),
+ (0xFBA, "V"),
+ (0xFBD, "X"),
+ (0xFBE, "V"),
+ (0xFCD, "X"),
+ (0xFCE, "V"),
+ (0xFDB, "X"),
+ (0x1000, "V"),
+ (0x10A0, "X"),
+ (0x10C7, "M", "ⴧ"),
+ (0x10C8, "X"),
+ (0x10CD, "M", "ⴭ"),
+ (0x10CE, "X"),
+ (0x10D0, "V"),
+ (0x10FC, "M", "ნ"),
+ (0x10FD, "V"),
+ (0x115F, "X"),
+ (0x1161, "V"),
+ (0x1249, "X"),
+ (0x124A, "V"),
+ (0x124E, "X"),
+ (0x1250, "V"),
+ (0x1257, "X"),
+ (0x1258, "V"),
+ (0x1259, "X"),
+ (0x125A, "V"),
+ (0x125E, "X"),
+ (0x1260, "V"),
+ (0x1289, "X"),
+ (0x128A, "V"),
+ (0x128E, "X"),
+ (0x1290, "V"),
+ (0x12B1, "X"),
+ (0x12B2, "V"),
+ (0x12B6, "X"),
+ (0x12B8, "V"),
+ (0x12BF, "X"),
+ (0x12C0, "V"),
+ (0x12C1, "X"),
+ (0x12C2, "V"),
+ (0x12C6, "X"),
+ (0x12C8, "V"),
+ (0x12D7, "X"),
+ (0x12D8, "V"),
+ (0x1311, "X"),
+ (0x1312, "V"),
+ ]
+
+
+def _seg_14() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0x1316, "X"),
+ (0x1318, "V"),
+ (0x135B, "X"),
+ (0x135D, "V"),
+ (0x137D, "X"),
+ (0x1380, "V"),
+ (0x139A, "X"),
+ (0x13A0, "V"),
+ (0x13F6, "X"),
+ (0x13F8, "M", "Ᏸ"),
+ (0x13F9, "M", "Ᏹ"),
+ (0x13FA, "M", "Ᏺ"),
+ (0x13FB, "M", "Ᏻ"),
+ (0x13FC, "M", "Ᏼ"),
+ (0x13FD, "M", "Ᏽ"),
+ (0x13FE, "X"),
+ (0x1400, "V"),
+ (0x1680, "X"),
+ (0x1681, "V"),
+ (0x169D, "X"),
+ (0x16A0, "V"),
+ (0x16F9, "X"),
+ (0x1700, "V"),
+ (0x1716, "X"),
+ (0x171F, "V"),
+ (0x1737, "X"),
+ (0x1740, "V"),
+ (0x1754, "X"),
+ (0x1760, "V"),
+ (0x176D, "X"),
+ (0x176E, "V"),
+ (0x1771, "X"),
+ (0x1772, "V"),
+ (0x1774, "X"),
+ (0x1780, "V"),
+ (0x17B4, "X"),
+ (0x17B6, "V"),
+ (0x17DE, "X"),
+ (0x17E0, "V"),
+ (0x17EA, "X"),
+ (0x17F0, "V"),
+ (0x17FA, "X"),
+ (0x1800, "V"),
+ (0x1806, "X"),
+ (0x1807, "V"),
+ (0x180B, "I"),
+ (0x180E, "X"),
+ (0x180F, "I"),
+ (0x1810, "V"),
+ (0x181A, "X"),
+ (0x1820, "V"),
+ (0x1879, "X"),
+ (0x1880, "V"),
+ (0x18AB, "X"),
+ (0x18B0, "V"),
+ (0x18F6, "X"),
+ (0x1900, "V"),
+ (0x191F, "X"),
+ (0x1920, "V"),
+ (0x192C, "X"),
+ (0x1930, "V"),
+ (0x193C, "X"),
+ (0x1940, "V"),
+ (0x1941, "X"),
+ (0x1944, "V"),
+ (0x196E, "X"),
+ (0x1970, "V"),
+ (0x1975, "X"),
+ (0x1980, "V"),
+ (0x19AC, "X"),
+ (0x19B0, "V"),
+ (0x19CA, "X"),
+ (0x19D0, "V"),
+ (0x19DB, "X"),
+ (0x19DE, "V"),
+ (0x1A1C, "X"),
+ (0x1A1E, "V"),
+ (0x1A5F, "X"),
+ (0x1A60, "V"),
+ (0x1A7D, "X"),
+ (0x1A7F, "V"),
+ (0x1A8A, "X"),
+ (0x1A90, "V"),
+ (0x1A9A, "X"),
+ (0x1AA0, "V"),
+ (0x1AAE, "X"),
+ (0x1AB0, "V"),
+ (0x1ACF, "X"),
+ (0x1B00, "V"),
+ (0x1B4D, "X"),
+ (0x1B50, "V"),
+ (0x1B7F, "X"),
+ (0x1B80, "V"),
+ (0x1BF4, "X"),
+ (0x1BFC, "V"),
+ (0x1C38, "X"),
+ (0x1C3B, "V"),
+ (0x1C4A, "X"),
+ (0x1C4D, "V"),
+ (0x1C80, "M", "в"),
+ ]
+
+
+def _seg_15() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0x1C81, "M", "д"),
+ (0x1C82, "M", "о"),
+ (0x1C83, "M", "с"),
+ (0x1C84, "M", "т"),
+ (0x1C86, "M", "ъ"),
+ (0x1C87, "M", "ѣ"),
+ (0x1C88, "M", "ꙋ"),
+ (0x1C89, "X"),
+ (0x1C90, "M", "ა"),
+ (0x1C91, "M", "ბ"),
+ (0x1C92, "M", "გ"),
+ (0x1C93, "M", "დ"),
+ (0x1C94, "M", "ე"),
+ (0x1C95, "M", "ვ"),
+ (0x1C96, "M", "ზ"),
+ (0x1C97, "M", "თ"),
+ (0x1C98, "M", "ი"),
+ (0x1C99, "M", "კ"),
+ (0x1C9A, "M", "ლ"),
+ (0x1C9B, "M", "მ"),
+ (0x1C9C, "M", "ნ"),
+ (0x1C9D, "M", "ო"),
+ (0x1C9E, "M", "პ"),
+ (0x1C9F, "M", "ჟ"),
+ (0x1CA0, "M", "რ"),
+ (0x1CA1, "M", "ს"),
+ (0x1CA2, "M", "ტ"),
+ (0x1CA3, "M", "უ"),
+ (0x1CA4, "M", "ფ"),
+ (0x1CA5, "M", "ქ"),
+ (0x1CA6, "M", "ღ"),
+ (0x1CA7, "M", "ყ"),
+ (0x1CA8, "M", "შ"),
+ (0x1CA9, "M", "ჩ"),
+ (0x1CAA, "M", "ც"),
+ (0x1CAB, "M", "ძ"),
+ (0x1CAC, "M", "წ"),
+ (0x1CAD, "M", "ჭ"),
+ (0x1CAE, "M", "ხ"),
+ (0x1CAF, "M", "ჯ"),
+ (0x1CB0, "M", "ჰ"),
+ (0x1CB1, "M", "ჱ"),
+ (0x1CB2, "M", "ჲ"),
+ (0x1CB3, "M", "ჳ"),
+ (0x1CB4, "M", "ჴ"),
+ (0x1CB5, "M", "ჵ"),
+ (0x1CB6, "M", "ჶ"),
+ (0x1CB7, "M", "ჷ"),
+ (0x1CB8, "M", "ჸ"),
+ (0x1CB9, "M", "ჹ"),
+ (0x1CBA, "M", "ჺ"),
+ (0x1CBB, "X"),
+ (0x1CBD, "M", "ჽ"),
+ (0x1CBE, "M", "ჾ"),
+ (0x1CBF, "M", "ჿ"),
+ (0x1CC0, "V"),
+ (0x1CC8, "X"),
+ (0x1CD0, "V"),
+ (0x1CFB, "X"),
+ (0x1D00, "V"),
+ (0x1D2C, "M", "a"),
+ (0x1D2D, "M", "æ"),
+ (0x1D2E, "M", "b"),
+ (0x1D2F, "V"),
+ (0x1D30, "M", "d"),
+ (0x1D31, "M", "e"),
+ (0x1D32, "M", "ǝ"),
+ (0x1D33, "M", "g"),
+ (0x1D34, "M", "h"),
+ (0x1D35, "M", "i"),
+ (0x1D36, "M", "j"),
+ (0x1D37, "M", "k"),
+ (0x1D38, "M", "l"),
+ (0x1D39, "M", "m"),
+ (0x1D3A, "M", "n"),
+ (0x1D3B, "V"),
+ (0x1D3C, "M", "o"),
+ (0x1D3D, "M", "ȣ"),
+ (0x1D3E, "M", "p"),
+ (0x1D3F, "M", "r"),
+ (0x1D40, "M", "t"),
+ (0x1D41, "M", "u"),
+ (0x1D42, "M", "w"),
+ (0x1D43, "M", "a"),
+ (0x1D44, "M", "ɐ"),
+ (0x1D45, "M", "ɑ"),
+ (0x1D46, "M", "ᴂ"),
+ (0x1D47, "M", "b"),
+ (0x1D48, "M", "d"),
+ (0x1D49, "M", "e"),
+ (0x1D4A, "M", "ə"),
+ (0x1D4B, "M", "ɛ"),
+ (0x1D4C, "M", "ɜ"),
+ (0x1D4D, "M", "g"),
+ (0x1D4E, "V"),
+ (0x1D4F, "M", "k"),
+ (0x1D50, "M", "m"),
+ (0x1D51, "M", "ŋ"),
+ (0x1D52, "M", "o"),
+ (0x1D53, "M", "ɔ"),
+ ]
+
+
+def _seg_16() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0x1D54, "M", "ᴖ"),
+ (0x1D55, "M", "ᴗ"),
+ (0x1D56, "M", "p"),
+ (0x1D57, "M", "t"),
+ (0x1D58, "M", "u"),
+ (0x1D59, "M", "ᴝ"),
+ (0x1D5A, "M", "ɯ"),
+ (0x1D5B, "M", "v"),
+ (0x1D5C, "M", "ᴥ"),
+ (0x1D5D, "M", "β"),
+ (0x1D5E, "M", "γ"),
+ (0x1D5F, "M", "δ"),
+ (0x1D60, "M", "φ"),
+ (0x1D61, "M", "χ"),
+ (0x1D62, "M", "i"),
+ (0x1D63, "M", "r"),
+ (0x1D64, "M", "u"),
+ (0x1D65, "M", "v"),
+ (0x1D66, "M", "β"),
+ (0x1D67, "M", "γ"),
+ (0x1D68, "M", "ρ"),
+ (0x1D69, "M", "φ"),
+ (0x1D6A, "M", "χ"),
+ (0x1D6B, "V"),
+ (0x1D78, "M", "н"),
+ (0x1D79, "V"),
+ (0x1D9B, "M", "ɒ"),
+ (0x1D9C, "M", "c"),
+ (0x1D9D, "M", "ɕ"),
+ (0x1D9E, "M", "ð"),
+ (0x1D9F, "M", "ɜ"),
+ (0x1DA0, "M", "f"),
+ (0x1DA1, "M", "ɟ"),
+ (0x1DA2, "M", "ɡ"),
+ (0x1DA3, "M", "ɥ"),
+ (0x1DA4, "M", "ɨ"),
+ (0x1DA5, "M", "ɩ"),
+ (0x1DA6, "M", "ɪ"),
+ (0x1DA7, "M", "ᵻ"),
+ (0x1DA8, "M", "ʝ"),
+ (0x1DA9, "M", "ɭ"),
+ (0x1DAA, "M", "ᶅ"),
+ (0x1DAB, "M", "ʟ"),
+ (0x1DAC, "M", "ɱ"),
+ (0x1DAD, "M", "ɰ"),
+ (0x1DAE, "M", "ɲ"),
+ (0x1DAF, "M", "ɳ"),
+ (0x1DB0, "M", "ɴ"),
+ (0x1DB1, "M", "ɵ"),
+ (0x1DB2, "M", "ɸ"),
+ (0x1DB3, "M", "ʂ"),
+ (0x1DB4, "M", "ʃ"),
+ (0x1DB5, "M", "ƫ"),
+ (0x1DB6, "M", "ʉ"),
+ (0x1DB7, "M", "ʊ"),
+ (0x1DB8, "M", "ᴜ"),
+ (0x1DB9, "M", "ʋ"),
+ (0x1DBA, "M", "ʌ"),
+ (0x1DBB, "M", "z"),
+ (0x1DBC, "M", "ʐ"),
+ (0x1DBD, "M", "ʑ"),
+ (0x1DBE, "M", "ʒ"),
+ (0x1DBF, "M", "θ"),
+ (0x1DC0, "V"),
+ (0x1E00, "M", "ḁ"),
+ (0x1E01, "V"),
+ (0x1E02, "M", "ḃ"),
+ (0x1E03, "V"),
+ (0x1E04, "M", "ḅ"),
+ (0x1E05, "V"),
+ (0x1E06, "M", "ḇ"),
+ (0x1E07, "V"),
+ (0x1E08, "M", "ḉ"),
+ (0x1E09, "V"),
+ (0x1E0A, "M", "ḋ"),
+ (0x1E0B, "V"),
+ (0x1E0C, "M", "ḍ"),
+ (0x1E0D, "V"),
+ (0x1E0E, "M", "ḏ"),
+ (0x1E0F, "V"),
+ (0x1E10, "M", "ḑ"),
+ (0x1E11, "V"),
+ (0x1E12, "M", "ḓ"),
+ (0x1E13, "V"),
+ (0x1E14, "M", "ḕ"),
+ (0x1E15, "V"),
+ (0x1E16, "M", "ḗ"),
+ (0x1E17, "V"),
+ (0x1E18, "M", "ḙ"),
+ (0x1E19, "V"),
+ (0x1E1A, "M", "ḛ"),
+ (0x1E1B, "V"),
+ (0x1E1C, "M", "ḝ"),
+ (0x1E1D, "V"),
+ (0x1E1E, "M", "ḟ"),
+ (0x1E1F, "V"),
+ (0x1E20, "M", "ḡ"),
+ (0x1E21, "V"),
+ (0x1E22, "M", "ḣ"),
+ (0x1E23, "V"),
+ ]
+
+
+def _seg_17() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0x1E24, "M", "ḥ"),
+ (0x1E25, "V"),
+ (0x1E26, "M", "ḧ"),
+ (0x1E27, "V"),
+ (0x1E28, "M", "ḩ"),
+ (0x1E29, "V"),
+ (0x1E2A, "M", "ḫ"),
+ (0x1E2B, "V"),
+ (0x1E2C, "M", "ḭ"),
+ (0x1E2D, "V"),
+ (0x1E2E, "M", "ḯ"),
+ (0x1E2F, "V"),
+ (0x1E30, "M", "ḱ"),
+ (0x1E31, "V"),
+ (0x1E32, "M", "ḳ"),
+ (0x1E33, "V"),
+ (0x1E34, "M", "ḵ"),
+ (0x1E35, "V"),
+ (0x1E36, "M", "ḷ"),
+ (0x1E37, "V"),
+ (0x1E38, "M", "ḹ"),
+ (0x1E39, "V"),
+ (0x1E3A, "M", "ḻ"),
+ (0x1E3B, "V"),
+ (0x1E3C, "M", "ḽ"),
+ (0x1E3D, "V"),
+ (0x1E3E, "M", "ḿ"),
+ (0x1E3F, "V"),
+ (0x1E40, "M", "ṁ"),
+ (0x1E41, "V"),
+ (0x1E42, "M", "ṃ"),
+ (0x1E43, "V"),
+ (0x1E44, "M", "ṅ"),
+ (0x1E45, "V"),
+ (0x1E46, "M", "ṇ"),
+ (0x1E47, "V"),
+ (0x1E48, "M", "ṉ"),
+ (0x1E49, "V"),
+ (0x1E4A, "M", "ṋ"),
+ (0x1E4B, "V"),
+ (0x1E4C, "M", "ṍ"),
+ (0x1E4D, "V"),
+ (0x1E4E, "M", "ṏ"),
+ (0x1E4F, "V"),
+ (0x1E50, "M", "ṑ"),
+ (0x1E51, "V"),
+ (0x1E52, "M", "ṓ"),
+ (0x1E53, "V"),
+ (0x1E54, "M", "ṕ"),
+ (0x1E55, "V"),
+ (0x1E56, "M", "ṗ"),
+ (0x1E57, "V"),
+ (0x1E58, "M", "ṙ"),
+ (0x1E59, "V"),
+ (0x1E5A, "M", "ṛ"),
+ (0x1E5B, "V"),
+ (0x1E5C, "M", "ṝ"),
+ (0x1E5D, "V"),
+ (0x1E5E, "M", "ṟ"),
+ (0x1E5F, "V"),
+ (0x1E60, "M", "ṡ"),
+ (0x1E61, "V"),
+ (0x1E62, "M", "ṣ"),
+ (0x1E63, "V"),
+ (0x1E64, "M", "ṥ"),
+ (0x1E65, "V"),
+ (0x1E66, "M", "ṧ"),
+ (0x1E67, "V"),
+ (0x1E68, "M", "ṩ"),
+ (0x1E69, "V"),
+ (0x1E6A, "M", "ṫ"),
+ (0x1E6B, "V"),
+ (0x1E6C, "M", "ṭ"),
+ (0x1E6D, "V"),
+ (0x1E6E, "M", "ṯ"),
+ (0x1E6F, "V"),
+ (0x1E70, "M", "ṱ"),
+ (0x1E71, "V"),
+ (0x1E72, "M", "ṳ"),
+ (0x1E73, "V"),
+ (0x1E74, "M", "ṵ"),
+ (0x1E75, "V"),
+ (0x1E76, "M", "ṷ"),
+ (0x1E77, "V"),
+ (0x1E78, "M", "ṹ"),
+ (0x1E79, "V"),
+ (0x1E7A, "M", "ṻ"),
+ (0x1E7B, "V"),
+ (0x1E7C, "M", "ṽ"),
+ (0x1E7D, "V"),
+ (0x1E7E, "M", "ṿ"),
+ (0x1E7F, "V"),
+ (0x1E80, "M", "ẁ"),
+ (0x1E81, "V"),
+ (0x1E82, "M", "ẃ"),
+ (0x1E83, "V"),
+ (0x1E84, "M", "ẅ"),
+ (0x1E85, "V"),
+ (0x1E86, "M", "ẇ"),
+ (0x1E87, "V"),
+ ]
+
+
+def _seg_18() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0x1E88, "M", "ẉ"),
+ (0x1E89, "V"),
+ (0x1E8A, "M", "ẋ"),
+ (0x1E8B, "V"),
+ (0x1E8C, "M", "ẍ"),
+ (0x1E8D, "V"),
+ (0x1E8E, "M", "ẏ"),
+ (0x1E8F, "V"),
+ (0x1E90, "M", "ẑ"),
+ (0x1E91, "V"),
+ (0x1E92, "M", "ẓ"),
+ (0x1E93, "V"),
+ (0x1E94, "M", "ẕ"),
+ (0x1E95, "V"),
+ (0x1E9A, "M", "aʾ"),
+ (0x1E9B, "M", "ṡ"),
+ (0x1E9C, "V"),
+ (0x1E9E, "M", "ß"),
+ (0x1E9F, "V"),
+ (0x1EA0, "M", "ạ"),
+ (0x1EA1, "V"),
+ (0x1EA2, "M", "ả"),
+ (0x1EA3, "V"),
+ (0x1EA4, "M", "ấ"),
+ (0x1EA5, "V"),
+ (0x1EA6, "M", "ầ"),
+ (0x1EA7, "V"),
+ (0x1EA8, "M", "ẩ"),
+ (0x1EA9, "V"),
+ (0x1EAA, "M", "ẫ"),
+ (0x1EAB, "V"),
+ (0x1EAC, "M", "ậ"),
+ (0x1EAD, "V"),
+ (0x1EAE, "M", "ắ"),
+ (0x1EAF, "V"),
+ (0x1EB0, "M", "ằ"),
+ (0x1EB1, "V"),
+ (0x1EB2, "M", "ẳ"),
+ (0x1EB3, "V"),
+ (0x1EB4, "M", "ẵ"),
+ (0x1EB5, "V"),
+ (0x1EB6, "M", "ặ"),
+ (0x1EB7, "V"),
+ (0x1EB8, "M", "ẹ"),
+ (0x1EB9, "V"),
+ (0x1EBA, "M", "ẻ"),
+ (0x1EBB, "V"),
+ (0x1EBC, "M", "ẽ"),
+ (0x1EBD, "V"),
+ (0x1EBE, "M", "ế"),
+ (0x1EBF, "V"),
+ (0x1EC0, "M", "ề"),
+ (0x1EC1, "V"),
+ (0x1EC2, "M", "ể"),
+ (0x1EC3, "V"),
+ (0x1EC4, "M", "ễ"),
+ (0x1EC5, "V"),
+ (0x1EC6, "M", "ệ"),
+ (0x1EC7, "V"),
+ (0x1EC8, "M", "ỉ"),
+ (0x1EC9, "V"),
+ (0x1ECA, "M", "ị"),
+ (0x1ECB, "V"),
+ (0x1ECC, "M", "ọ"),
+ (0x1ECD, "V"),
+ (0x1ECE, "M", "ỏ"),
+ (0x1ECF, "V"),
+ (0x1ED0, "M", "ố"),
+ (0x1ED1, "V"),
+ (0x1ED2, "M", "ồ"),
+ (0x1ED3, "V"),
+ (0x1ED4, "M", "ổ"),
+ (0x1ED5, "V"),
+ (0x1ED6, "M", "ỗ"),
+ (0x1ED7, "V"),
+ (0x1ED8, "M", "ộ"),
+ (0x1ED9, "V"),
+ (0x1EDA, "M", "ớ"),
+ (0x1EDB, "V"),
+ (0x1EDC, "M", "ờ"),
+ (0x1EDD, "V"),
+ (0x1EDE, "M", "ở"),
+ (0x1EDF, "V"),
+ (0x1EE0, "M", "ỡ"),
+ (0x1EE1, "V"),
+ (0x1EE2, "M", "ợ"),
+ (0x1EE3, "V"),
+ (0x1EE4, "M", "ụ"),
+ (0x1EE5, "V"),
+ (0x1EE6, "M", "ủ"),
+ (0x1EE7, "V"),
+ (0x1EE8, "M", "ứ"),
+ (0x1EE9, "V"),
+ (0x1EEA, "M", "ừ"),
+ (0x1EEB, "V"),
+ (0x1EEC, "M", "ử"),
+ (0x1EED, "V"),
+ (0x1EEE, "M", "ữ"),
+ (0x1EEF, "V"),
+ (0x1EF0, "M", "ự"),
+ ]
+
+
+def _seg_19() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0x1EF1, "V"),
+ (0x1EF2, "M", "ỳ"),
+ (0x1EF3, "V"),
+ (0x1EF4, "M", "ỵ"),
+ (0x1EF5, "V"),
+ (0x1EF6, "M", "ỷ"),
+ (0x1EF7, "V"),
+ (0x1EF8, "M", "ỹ"),
+ (0x1EF9, "V"),
+ (0x1EFA, "M", "ỻ"),
+ (0x1EFB, "V"),
+ (0x1EFC, "M", "ỽ"),
+ (0x1EFD, "V"),
+ (0x1EFE, "M", "ỿ"),
+ (0x1EFF, "V"),
+ (0x1F08, "M", "ἀ"),
+ (0x1F09, "M", "ἁ"),
+ (0x1F0A, "M", "ἂ"),
+ (0x1F0B, "M", "ἃ"),
+ (0x1F0C, "M", "ἄ"),
+ (0x1F0D, "M", "ἅ"),
+ (0x1F0E, "M", "ἆ"),
+ (0x1F0F, "M", "ἇ"),
+ (0x1F10, "V"),
+ (0x1F16, "X"),
+ (0x1F18, "M", "ἐ"),
+ (0x1F19, "M", "ἑ"),
+ (0x1F1A, "M", "ἒ"),
+ (0x1F1B, "M", "ἓ"),
+ (0x1F1C, "M", "ἔ"),
+ (0x1F1D, "M", "ἕ"),
+ (0x1F1E, "X"),
+ (0x1F20, "V"),
+ (0x1F28, "M", "ἠ"),
+ (0x1F29, "M", "ἡ"),
+ (0x1F2A, "M", "ἢ"),
+ (0x1F2B, "M", "ἣ"),
+ (0x1F2C, "M", "ἤ"),
+ (0x1F2D, "M", "ἥ"),
+ (0x1F2E, "M", "ἦ"),
+ (0x1F2F, "M", "ἧ"),
+ (0x1F30, "V"),
+ (0x1F38, "M", "ἰ"),
+ (0x1F39, "M", "ἱ"),
+ (0x1F3A, "M", "ἲ"),
+ (0x1F3B, "M", "ἳ"),
+ (0x1F3C, "M", "ἴ"),
+ (0x1F3D, "M", "ἵ"),
+ (0x1F3E, "M", "ἶ"),
+ (0x1F3F, "M", "ἷ"),
+ (0x1F40, "V"),
+ (0x1F46, "X"),
+ (0x1F48, "M", "ὀ"),
+ (0x1F49, "M", "ὁ"),
+ (0x1F4A, "M", "ὂ"),
+ (0x1F4B, "M", "ὃ"),
+ (0x1F4C, "M", "ὄ"),
+ (0x1F4D, "M", "ὅ"),
+ (0x1F4E, "X"),
+ (0x1F50, "V"),
+ (0x1F58, "X"),
+ (0x1F59, "M", "ὑ"),
+ (0x1F5A, "X"),
+ (0x1F5B, "M", "ὓ"),
+ (0x1F5C, "X"),
+ (0x1F5D, "M", "ὕ"),
+ (0x1F5E, "X"),
+ (0x1F5F, "M", "ὗ"),
+ (0x1F60, "V"),
+ (0x1F68, "M", "ὠ"),
+ (0x1F69, "M", "ὡ"),
+ (0x1F6A, "M", "ὢ"),
+ (0x1F6B, "M", "ὣ"),
+ (0x1F6C, "M", "ὤ"),
+ (0x1F6D, "M", "ὥ"),
+ (0x1F6E, "M", "ὦ"),
+ (0x1F6F, "M", "ὧ"),
+ (0x1F70, "V"),
+ (0x1F71, "M", "ά"),
+ (0x1F72, "V"),
+ (0x1F73, "M", "έ"),
+ (0x1F74, "V"),
+ (0x1F75, "M", "ή"),
+ (0x1F76, "V"),
+ (0x1F77, "M", "ί"),
+ (0x1F78, "V"),
+ (0x1F79, "M", "ό"),
+ (0x1F7A, "V"),
+ (0x1F7B, "M", "ύ"),
+ (0x1F7C, "V"),
+ (0x1F7D, "M", "ώ"),
+ (0x1F7E, "X"),
+ (0x1F80, "M", "ἀι"),
+ (0x1F81, "M", "ἁι"),
+ (0x1F82, "M", "ἂι"),
+ (0x1F83, "M", "ἃι"),
+ (0x1F84, "M", "ἄι"),
+ (0x1F85, "M", "ἅι"),
+ (0x1F86, "M", "ἆι"),
+ (0x1F87, "M", "ἇι"),
+ ]
+
+
+def _seg_20() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0x1F88, "M", "ἀι"),
+ (0x1F89, "M", "ἁι"),
+ (0x1F8A, "M", "ἂι"),
+ (0x1F8B, "M", "ἃι"),
+ (0x1F8C, "M", "ἄι"),
+ (0x1F8D, "M", "ἅι"),
+ (0x1F8E, "M", "ἆι"),
+ (0x1F8F, "M", "ἇι"),
+ (0x1F90, "M", "ἠι"),
+ (0x1F91, "M", "ἡι"),
+ (0x1F92, "M", "ἢι"),
+ (0x1F93, "M", "ἣι"),
+ (0x1F94, "M", "ἤι"),
+ (0x1F95, "M", "ἥι"),
+ (0x1F96, "M", "ἦι"),
+ (0x1F97, "M", "ἧι"),
+ (0x1F98, "M", "ἠι"),
+ (0x1F99, "M", "ἡι"),
+ (0x1F9A, "M", "ἢι"),
+ (0x1F9B, "M", "ἣι"),
+ (0x1F9C, "M", "ἤι"),
+ (0x1F9D, "M", "ἥι"),
+ (0x1F9E, "M", "ἦι"),
+ (0x1F9F, "M", "ἧι"),
+ (0x1FA0, "M", "ὠι"),
+ (0x1FA1, "M", "ὡι"),
+ (0x1FA2, "M", "ὢι"),
+ (0x1FA3, "M", "ὣι"),
+ (0x1FA4, "M", "ὤι"),
+ (0x1FA5, "M", "ὥι"),
+ (0x1FA6, "M", "ὦι"),
+ (0x1FA7, "M", "ὧι"),
+ (0x1FA8, "M", "ὠι"),
+ (0x1FA9, "M", "ὡι"),
+ (0x1FAA, "M", "ὢι"),
+ (0x1FAB, "M", "ὣι"),
+ (0x1FAC, "M", "ὤι"),
+ (0x1FAD, "M", "ὥι"),
+ (0x1FAE, "M", "ὦι"),
+ (0x1FAF, "M", "ὧι"),
+ (0x1FB0, "V"),
+ (0x1FB2, "M", "ὰι"),
+ (0x1FB3, "M", "αι"),
+ (0x1FB4, "M", "άι"),
+ (0x1FB5, "X"),
+ (0x1FB6, "V"),
+ (0x1FB7, "M", "ᾶι"),
+ (0x1FB8, "M", "ᾰ"),
+ (0x1FB9, "M", "ᾱ"),
+ (0x1FBA, "M", "ὰ"),
+ (0x1FBB, "M", "ά"),
+ (0x1FBC, "M", "αι"),
+ (0x1FBD, "3", " ̓"),
+ (0x1FBE, "M", "ι"),
+ (0x1FBF, "3", " ̓"),
+ (0x1FC0, "3", " ͂"),
+ (0x1FC1, "3", " ̈͂"),
+ (0x1FC2, "M", "ὴι"),
+ (0x1FC3, "M", "ηι"),
+ (0x1FC4, "M", "ήι"),
+ (0x1FC5, "X"),
+ (0x1FC6, "V"),
+ (0x1FC7, "M", "ῆι"),
+ (0x1FC8, "M", "ὲ"),
+ (0x1FC9, "M", "έ"),
+ (0x1FCA, "M", "ὴ"),
+ (0x1FCB, "M", "ή"),
+ (0x1FCC, "M", "ηι"),
+ (0x1FCD, "3", " ̓̀"),
+ (0x1FCE, "3", " ̓́"),
+ (0x1FCF, "3", " ̓͂"),
+ (0x1FD0, "V"),
+ (0x1FD3, "M", "ΐ"),
+ (0x1FD4, "X"),
+ (0x1FD6, "V"),
+ (0x1FD8, "M", "ῐ"),
+ (0x1FD9, "M", "ῑ"),
+ (0x1FDA, "M", "ὶ"),
+ (0x1FDB, "M", "ί"),
+ (0x1FDC, "X"),
+ (0x1FDD, "3", " ̔̀"),
+ (0x1FDE, "3", " ̔́"),
+ (0x1FDF, "3", " ̔͂"),
+ (0x1FE0, "V"),
+ (0x1FE3, "M", "ΰ"),
+ (0x1FE4, "V"),
+ (0x1FE8, "M", "ῠ"),
+ (0x1FE9, "M", "ῡ"),
+ (0x1FEA, "M", "ὺ"),
+ (0x1FEB, "M", "ύ"),
+ (0x1FEC, "M", "ῥ"),
+ (0x1FED, "3", " ̈̀"),
+ (0x1FEE, "3", " ̈́"),
+ (0x1FEF, "3", "`"),
+ (0x1FF0, "X"),
+ (0x1FF2, "M", "ὼι"),
+ (0x1FF3, "M", "ωι"),
+ (0x1FF4, "M", "ώι"),
+ (0x1FF5, "X"),
+ (0x1FF6, "V"),
+ ]
+
+
+def _seg_21() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0x1FF7, "M", "ῶι"),
+ (0x1FF8, "M", "ὸ"),
+ (0x1FF9, "M", "ό"),
+ (0x1FFA, "M", "ὼ"),
+ (0x1FFB, "M", "ώ"),
+ (0x1FFC, "M", "ωι"),
+ (0x1FFD, "3", " ́"),
+ (0x1FFE, "3", " ̔"),
+ (0x1FFF, "X"),
+ (0x2000, "3", " "),
+ (0x200B, "I"),
+ (0x200C, "D", ""),
+ (0x200E, "X"),
+ (0x2010, "V"),
+ (0x2011, "M", "‐"),
+ (0x2012, "V"),
+ (0x2017, "3", " ̳"),
+ (0x2018, "V"),
+ (0x2024, "X"),
+ (0x2027, "V"),
+ (0x2028, "X"),
+ (0x202F, "3", " "),
+ (0x2030, "V"),
+ (0x2033, "M", "′′"),
+ (0x2034, "M", "′′′"),
+ (0x2035, "V"),
+ (0x2036, "M", "‵‵"),
+ (0x2037, "M", "‵‵‵"),
+ (0x2038, "V"),
+ (0x203C, "3", "!!"),
+ (0x203D, "V"),
+ (0x203E, "3", " ̅"),
+ (0x203F, "V"),
+ (0x2047, "3", "??"),
+ (0x2048, "3", "?!"),
+ (0x2049, "3", "!?"),
+ (0x204A, "V"),
+ (0x2057, "M", "′′′′"),
+ (0x2058, "V"),
+ (0x205F, "3", " "),
+ (0x2060, "I"),
+ (0x2061, "X"),
+ (0x2064, "I"),
+ (0x2065, "X"),
+ (0x2070, "M", "0"),
+ (0x2071, "M", "i"),
+ (0x2072, "X"),
+ (0x2074, "M", "4"),
+ (0x2075, "M", "5"),
+ (0x2076, "M", "6"),
+ (0x2077, "M", "7"),
+ (0x2078, "M", "8"),
+ (0x2079, "M", "9"),
+ (0x207A, "3", "+"),
+ (0x207B, "M", "−"),
+ (0x207C, "3", "="),
+ (0x207D, "3", "("),
+ (0x207E, "3", ")"),
+ (0x207F, "M", "n"),
+ (0x2080, "M", "0"),
+ (0x2081, "M", "1"),
+ (0x2082, "M", "2"),
+ (0x2083, "M", "3"),
+ (0x2084, "M", "4"),
+ (0x2085, "M", "5"),
+ (0x2086, "M", "6"),
+ (0x2087, "M", "7"),
+ (0x2088, "M", "8"),
+ (0x2089, "M", "9"),
+ (0x208A, "3", "+"),
+ (0x208B, "M", "−"),
+ (0x208C, "3", "="),
+ (0x208D, "3", "("),
+ (0x208E, "3", ")"),
+ (0x208F, "X"),
+ (0x2090, "M", "a"),
+ (0x2091, "M", "e"),
+ (0x2092, "M", "o"),
+ (0x2093, "M", "x"),
+ (0x2094, "M", "ə"),
+ (0x2095, "M", "h"),
+ (0x2096, "M", "k"),
+ (0x2097, "M", "l"),
+ (0x2098, "M", "m"),
+ (0x2099, "M", "n"),
+ (0x209A, "M", "p"),
+ (0x209B, "M", "s"),
+ (0x209C, "M", "t"),
+ (0x209D, "X"),
+ (0x20A0, "V"),
+ (0x20A8, "M", "rs"),
+ (0x20A9, "V"),
+ (0x20C1, "X"),
+ (0x20D0, "V"),
+ (0x20F1, "X"),
+ (0x2100, "3", "a/c"),
+ (0x2101, "3", "a/s"),
+ (0x2102, "M", "c"),
+ (0x2103, "M", "°c"),
+ (0x2104, "V"),
+ ]
+
+
+def _seg_22() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0x2105, "3", "c/o"),
+ (0x2106, "3", "c/u"),
+ (0x2107, "M", "ɛ"),
+ (0x2108, "V"),
+ (0x2109, "M", "°f"),
+ (0x210A, "M", "g"),
+ (0x210B, "M", "h"),
+ (0x210F, "M", "ħ"),
+ (0x2110, "M", "i"),
+ (0x2112, "M", "l"),
+ (0x2114, "V"),
+ (0x2115, "M", "n"),
+ (0x2116, "M", "no"),
+ (0x2117, "V"),
+ (0x2119, "M", "p"),
+ (0x211A, "M", "q"),
+ (0x211B, "M", "r"),
+ (0x211E, "V"),
+ (0x2120, "M", "sm"),
+ (0x2121, "M", "tel"),
+ (0x2122, "M", "tm"),
+ (0x2123, "V"),
+ (0x2124, "M", "z"),
+ (0x2125, "V"),
+ (0x2126, "M", "ω"),
+ (0x2127, "V"),
+ (0x2128, "M", "z"),
+ (0x2129, "V"),
+ (0x212A, "M", "k"),
+ (0x212B, "M", "å"),
+ (0x212C, "M", "b"),
+ (0x212D, "M", "c"),
+ (0x212E, "V"),
+ (0x212F, "M", "e"),
+ (0x2131, "M", "f"),
+ (0x2132, "X"),
+ (0x2133, "M", "m"),
+ (0x2134, "M", "o"),
+ (0x2135, "M", "א"),
+ (0x2136, "M", "ב"),
+ (0x2137, "M", "ג"),
+ (0x2138, "M", "ד"),
+ (0x2139, "M", "i"),
+ (0x213A, "V"),
+ (0x213B, "M", "fax"),
+ (0x213C, "M", "π"),
+ (0x213D, "M", "γ"),
+ (0x213F, "M", "π"),
+ (0x2140, "M", "∑"),
+ (0x2141, "V"),
+ (0x2145, "M", "d"),
+ (0x2147, "M", "e"),
+ (0x2148, "M", "i"),
+ (0x2149, "M", "j"),
+ (0x214A, "V"),
+ (0x2150, "M", "1⁄7"),
+ (0x2151, "M", "1⁄9"),
+ (0x2152, "M", "1⁄10"),
+ (0x2153, "M", "1⁄3"),
+ (0x2154, "M", "2⁄3"),
+ (0x2155, "M", "1⁄5"),
+ (0x2156, "M", "2⁄5"),
+ (0x2157, "M", "3⁄5"),
+ (0x2158, "M", "4⁄5"),
+ (0x2159, "M", "1⁄6"),
+ (0x215A, "M", "5⁄6"),
+ (0x215B, "M", "1⁄8"),
+ (0x215C, "M", "3⁄8"),
+ (0x215D, "M", "5⁄8"),
+ (0x215E, "M", "7⁄8"),
+ (0x215F, "M", "1⁄"),
+ (0x2160, "M", "i"),
+ (0x2161, "M", "ii"),
+ (0x2162, "M", "iii"),
+ (0x2163, "M", "iv"),
+ (0x2164, "M", "v"),
+ (0x2165, "M", "vi"),
+ (0x2166, "M", "vii"),
+ (0x2167, "M", "viii"),
+ (0x2168, "M", "ix"),
+ (0x2169, "M", "x"),
+ (0x216A, "M", "xi"),
+ (0x216B, "M", "xii"),
+ (0x216C, "M", "l"),
+ (0x216D, "M", "c"),
+ (0x216E, "M", "d"),
+ (0x216F, "M", "m"),
+ (0x2170, "M", "i"),
+ (0x2171, "M", "ii"),
+ (0x2172, "M", "iii"),
+ (0x2173, "M", "iv"),
+ (0x2174, "M", "v"),
+ (0x2175, "M", "vi"),
+ (0x2176, "M", "vii"),
+ (0x2177, "M", "viii"),
+ (0x2178, "M", "ix"),
+ (0x2179, "M", "x"),
+ (0x217A, "M", "xi"),
+ (0x217B, "M", "xii"),
+ (0x217C, "M", "l"),
+ ]
+
+
+def _seg_23() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0x217D, "M", "c"),
+ (0x217E, "M", "d"),
+ (0x217F, "M", "m"),
+ (0x2180, "V"),
+ (0x2183, "X"),
+ (0x2184, "V"),
+ (0x2189, "M", "0⁄3"),
+ (0x218A, "V"),
+ (0x218C, "X"),
+ (0x2190, "V"),
+ (0x222C, "M", "∫∫"),
+ (0x222D, "M", "∫∫∫"),
+ (0x222E, "V"),
+ (0x222F, "M", "∮∮"),
+ (0x2230, "M", "∮∮∮"),
+ (0x2231, "V"),
+ (0x2329, "M", "〈"),
+ (0x232A, "M", "〉"),
+ (0x232B, "V"),
+ (0x2427, "X"),
+ (0x2440, "V"),
+ (0x244B, "X"),
+ (0x2460, "M", "1"),
+ (0x2461, "M", "2"),
+ (0x2462, "M", "3"),
+ (0x2463, "M", "4"),
+ (0x2464, "M", "5"),
+ (0x2465, "M", "6"),
+ (0x2466, "M", "7"),
+ (0x2467, "M", "8"),
+ (0x2468, "M", "9"),
+ (0x2469, "M", "10"),
+ (0x246A, "M", "11"),
+ (0x246B, "M", "12"),
+ (0x246C, "M", "13"),
+ (0x246D, "M", "14"),
+ (0x246E, "M", "15"),
+ (0x246F, "M", "16"),
+ (0x2470, "M", "17"),
+ (0x2471, "M", "18"),
+ (0x2472, "M", "19"),
+ (0x2473, "M", "20"),
+ (0x2474, "3", "(1)"),
+ (0x2475, "3", "(2)"),
+ (0x2476, "3", "(3)"),
+ (0x2477, "3", "(4)"),
+ (0x2478, "3", "(5)"),
+ (0x2479, "3", "(6)"),
+ (0x247A, "3", "(7)"),
+ (0x247B, "3", "(8)"),
+ (0x247C, "3", "(9)"),
+ (0x247D, "3", "(10)"),
+ (0x247E, "3", "(11)"),
+ (0x247F, "3", "(12)"),
+ (0x2480, "3", "(13)"),
+ (0x2481, "3", "(14)"),
+ (0x2482, "3", "(15)"),
+ (0x2483, "3", "(16)"),
+ (0x2484, "3", "(17)"),
+ (0x2485, "3", "(18)"),
+ (0x2486, "3", "(19)"),
+ (0x2487, "3", "(20)"),
+ (0x2488, "X"),
+ (0x249C, "3", "(a)"),
+ (0x249D, "3", "(b)"),
+ (0x249E, "3", "(c)"),
+ (0x249F, "3", "(d)"),
+ (0x24A0, "3", "(e)"),
+ (0x24A1, "3", "(f)"),
+ (0x24A2, "3", "(g)"),
+ (0x24A3, "3", "(h)"),
+ (0x24A4, "3", "(i)"),
+ (0x24A5, "3", "(j)"),
+ (0x24A6, "3", "(k)"),
+ (0x24A7, "3", "(l)"),
+ (0x24A8, "3", "(m)"),
+ (0x24A9, "3", "(n)"),
+ (0x24AA, "3", "(o)"),
+ (0x24AB, "3", "(p)"),
+ (0x24AC, "3", "(q)"),
+ (0x24AD, "3", "(r)"),
+ (0x24AE, "3", "(s)"),
+ (0x24AF, "3", "(t)"),
+ (0x24B0, "3", "(u)"),
+ (0x24B1, "3", "(v)"),
+ (0x24B2, "3", "(w)"),
+ (0x24B3, "3", "(x)"),
+ (0x24B4, "3", "(y)"),
+ (0x24B5, "3", "(z)"),
+ (0x24B6, "M", "a"),
+ (0x24B7, "M", "b"),
+ (0x24B8, "M", "c"),
+ (0x24B9, "M", "d"),
+ (0x24BA, "M", "e"),
+ (0x24BB, "M", "f"),
+ (0x24BC, "M", "g"),
+ (0x24BD, "M", "h"),
+ (0x24BE, "M", "i"),
+ (0x24BF, "M", "j"),
+ (0x24C0, "M", "k"),
+ ]
+
+
+def _seg_24() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0x24C1, "M", "l"),
+ (0x24C2, "M", "m"),
+ (0x24C3, "M", "n"),
+ (0x24C4, "M", "o"),
+ (0x24C5, "M", "p"),
+ (0x24C6, "M", "q"),
+ (0x24C7, "M", "r"),
+ (0x24C8, "M", "s"),
+ (0x24C9, "M", "t"),
+ (0x24CA, "M", "u"),
+ (0x24CB, "M", "v"),
+ (0x24CC, "M", "w"),
+ (0x24CD, "M", "x"),
+ (0x24CE, "M", "y"),
+ (0x24CF, "M", "z"),
+ (0x24D0, "M", "a"),
+ (0x24D1, "M", "b"),
+ (0x24D2, "M", "c"),
+ (0x24D3, "M", "d"),
+ (0x24D4, "M", "e"),
+ (0x24D5, "M", "f"),
+ (0x24D6, "M", "g"),
+ (0x24D7, "M", "h"),
+ (0x24D8, "M", "i"),
+ (0x24D9, "M", "j"),
+ (0x24DA, "M", "k"),
+ (0x24DB, "M", "l"),
+ (0x24DC, "M", "m"),
+ (0x24DD, "M", "n"),
+ (0x24DE, "M", "o"),
+ (0x24DF, "M", "p"),
+ (0x24E0, "M", "q"),
+ (0x24E1, "M", "r"),
+ (0x24E2, "M", "s"),
+ (0x24E3, "M", "t"),
+ (0x24E4, "M", "u"),
+ (0x24E5, "M", "v"),
+ (0x24E6, "M", "w"),
+ (0x24E7, "M", "x"),
+ (0x24E8, "M", "y"),
+ (0x24E9, "M", "z"),
+ (0x24EA, "M", "0"),
+ (0x24EB, "V"),
+ (0x2A0C, "M", "∫∫∫∫"),
+ (0x2A0D, "V"),
+ (0x2A74, "3", "::="),
+ (0x2A75, "3", "=="),
+ (0x2A76, "3", "==="),
+ (0x2A77, "V"),
+ (0x2ADC, "M", "⫝̸"),
+ (0x2ADD, "V"),
+ (0x2B74, "X"),
+ (0x2B76, "V"),
+ (0x2B96, "X"),
+ (0x2B97, "V"),
+ (0x2C00, "M", "ⰰ"),
+ (0x2C01, "M", "ⰱ"),
+ (0x2C02, "M", "ⰲ"),
+ (0x2C03, "M", "ⰳ"),
+ (0x2C04, "M", "ⰴ"),
+ (0x2C05, "M", "ⰵ"),
+ (0x2C06, "M", "ⰶ"),
+ (0x2C07, "M", "ⰷ"),
+ (0x2C08, "M", "ⰸ"),
+ (0x2C09, "M", "ⰹ"),
+ (0x2C0A, "M", "ⰺ"),
+ (0x2C0B, "M", "ⰻ"),
+ (0x2C0C, "M", "ⰼ"),
+ (0x2C0D, "M", "ⰽ"),
+ (0x2C0E, "M", "ⰾ"),
+ (0x2C0F, "M", "ⰿ"),
+ (0x2C10, "M", "ⱀ"),
+ (0x2C11, "M", "ⱁ"),
+ (0x2C12, "M", "ⱂ"),
+ (0x2C13, "M", "ⱃ"),
+ (0x2C14, "M", "ⱄ"),
+ (0x2C15, "M", "ⱅ"),
+ (0x2C16, "M", "ⱆ"),
+ (0x2C17, "M", "ⱇ"),
+ (0x2C18, "M", "ⱈ"),
+ (0x2C19, "M", "ⱉ"),
+ (0x2C1A, "M", "ⱊ"),
+ (0x2C1B, "M", "ⱋ"),
+ (0x2C1C, "M", "ⱌ"),
+ (0x2C1D, "M", "ⱍ"),
+ (0x2C1E, "M", "ⱎ"),
+ (0x2C1F, "M", "ⱏ"),
+ (0x2C20, "M", "ⱐ"),
+ (0x2C21, "M", "ⱑ"),
+ (0x2C22, "M", "ⱒ"),
+ (0x2C23, "M", "ⱓ"),
+ (0x2C24, "M", "ⱔ"),
+ (0x2C25, "M", "ⱕ"),
+ (0x2C26, "M", "ⱖ"),
+ (0x2C27, "M", "ⱗ"),
+ (0x2C28, "M", "ⱘ"),
+ (0x2C29, "M", "ⱙ"),
+ (0x2C2A, "M", "ⱚ"),
+ (0x2C2B, "M", "ⱛ"),
+ (0x2C2C, "M", "ⱜ"),
+ ]
+
+
+def _seg_25() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0x2C2D, "M", "ⱝ"),
+ (0x2C2E, "M", "ⱞ"),
+ (0x2C2F, "M", "ⱟ"),
+ (0x2C30, "V"),
+ (0x2C60, "M", "ⱡ"),
+ (0x2C61, "V"),
+ (0x2C62, "M", "ɫ"),
+ (0x2C63, "M", "ᵽ"),
+ (0x2C64, "M", "ɽ"),
+ (0x2C65, "V"),
+ (0x2C67, "M", "ⱨ"),
+ (0x2C68, "V"),
+ (0x2C69, "M", "ⱪ"),
+ (0x2C6A, "V"),
+ (0x2C6B, "M", "ⱬ"),
+ (0x2C6C, "V"),
+ (0x2C6D, "M", "ɑ"),
+ (0x2C6E, "M", "ɱ"),
+ (0x2C6F, "M", "ɐ"),
+ (0x2C70, "M", "ɒ"),
+ (0x2C71, "V"),
+ (0x2C72, "M", "ⱳ"),
+ (0x2C73, "V"),
+ (0x2C75, "M", "ⱶ"),
+ (0x2C76, "V"),
+ (0x2C7C, "M", "j"),
+ (0x2C7D, "M", "v"),
+ (0x2C7E, "M", "ȿ"),
+ (0x2C7F, "M", "ɀ"),
+ (0x2C80, "M", "ⲁ"),
+ (0x2C81, "V"),
+ (0x2C82, "M", "ⲃ"),
+ (0x2C83, "V"),
+ (0x2C84, "M", "ⲅ"),
+ (0x2C85, "V"),
+ (0x2C86, "M", "ⲇ"),
+ (0x2C87, "V"),
+ (0x2C88, "M", "ⲉ"),
+ (0x2C89, "V"),
+ (0x2C8A, "M", "ⲋ"),
+ (0x2C8B, "V"),
+ (0x2C8C, "M", "ⲍ"),
+ (0x2C8D, "V"),
+ (0x2C8E, "M", "ⲏ"),
+ (0x2C8F, "V"),
+ (0x2C90, "M", "ⲑ"),
+ (0x2C91, "V"),
+ (0x2C92, "M", "ⲓ"),
+ (0x2C93, "V"),
+ (0x2C94, "M", "ⲕ"),
+ (0x2C95, "V"),
+ (0x2C96, "M", "ⲗ"),
+ (0x2C97, "V"),
+ (0x2C98, "M", "ⲙ"),
+ (0x2C99, "V"),
+ (0x2C9A, "M", "ⲛ"),
+ (0x2C9B, "V"),
+ (0x2C9C, "M", "ⲝ"),
+ (0x2C9D, "V"),
+ (0x2C9E, "M", "ⲟ"),
+ (0x2C9F, "V"),
+ (0x2CA0, "M", "ⲡ"),
+ (0x2CA1, "V"),
+ (0x2CA2, "M", "ⲣ"),
+ (0x2CA3, "V"),
+ (0x2CA4, "M", "ⲥ"),
+ (0x2CA5, "V"),
+ (0x2CA6, "M", "ⲧ"),
+ (0x2CA7, "V"),
+ (0x2CA8, "M", "ⲩ"),
+ (0x2CA9, "V"),
+ (0x2CAA, "M", "ⲫ"),
+ (0x2CAB, "V"),
+ (0x2CAC, "M", "ⲭ"),
+ (0x2CAD, "V"),
+ (0x2CAE, "M", "ⲯ"),
+ (0x2CAF, "V"),
+ (0x2CB0, "M", "ⲱ"),
+ (0x2CB1, "V"),
+ (0x2CB2, "M", "ⲳ"),
+ (0x2CB3, "V"),
+ (0x2CB4, "M", "ⲵ"),
+ (0x2CB5, "V"),
+ (0x2CB6, "M", "ⲷ"),
+ (0x2CB7, "V"),
+ (0x2CB8, "M", "ⲹ"),
+ (0x2CB9, "V"),
+ (0x2CBA, "M", "ⲻ"),
+ (0x2CBB, "V"),
+ (0x2CBC, "M", "ⲽ"),
+ (0x2CBD, "V"),
+ (0x2CBE, "M", "ⲿ"),
+ (0x2CBF, "V"),
+ (0x2CC0, "M", "ⳁ"),
+ (0x2CC1, "V"),
+ (0x2CC2, "M", "ⳃ"),
+ (0x2CC3, "V"),
+ (0x2CC4, "M", "ⳅ"),
+ (0x2CC5, "V"),
+ (0x2CC6, "M", "ⳇ"),
+ ]
+
+
+def _seg_26() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0x2CC7, "V"),
+ (0x2CC8, "M", "ⳉ"),
+ (0x2CC9, "V"),
+ (0x2CCA, "M", "ⳋ"),
+ (0x2CCB, "V"),
+ (0x2CCC, "M", "ⳍ"),
+ (0x2CCD, "V"),
+ (0x2CCE, "M", "ⳏ"),
+ (0x2CCF, "V"),
+ (0x2CD0, "M", "ⳑ"),
+ (0x2CD1, "V"),
+ (0x2CD2, "M", "ⳓ"),
+ (0x2CD3, "V"),
+ (0x2CD4, "M", "ⳕ"),
+ (0x2CD5, "V"),
+ (0x2CD6, "M", "ⳗ"),
+ (0x2CD7, "V"),
+ (0x2CD8, "M", "ⳙ"),
+ (0x2CD9, "V"),
+ (0x2CDA, "M", "ⳛ"),
+ (0x2CDB, "V"),
+ (0x2CDC, "M", "ⳝ"),
+ (0x2CDD, "V"),
+ (0x2CDE, "M", "ⳟ"),
+ (0x2CDF, "V"),
+ (0x2CE0, "M", "ⳡ"),
+ (0x2CE1, "V"),
+ (0x2CE2, "M", "ⳣ"),
+ (0x2CE3, "V"),
+ (0x2CEB, "M", "ⳬ"),
+ (0x2CEC, "V"),
+ (0x2CED, "M", "ⳮ"),
+ (0x2CEE, "V"),
+ (0x2CF2, "M", "ⳳ"),
+ (0x2CF3, "V"),
+ (0x2CF4, "X"),
+ (0x2CF9, "V"),
+ (0x2D26, "X"),
+ (0x2D27, "V"),
+ (0x2D28, "X"),
+ (0x2D2D, "V"),
+ (0x2D2E, "X"),
+ (0x2D30, "V"),
+ (0x2D68, "X"),
+ (0x2D6F, "M", "ⵡ"),
+ (0x2D70, "V"),
+ (0x2D71, "X"),
+ (0x2D7F, "V"),
+ (0x2D97, "X"),
+ (0x2DA0, "V"),
+ (0x2DA7, "X"),
+ (0x2DA8, "V"),
+ (0x2DAF, "X"),
+ (0x2DB0, "V"),
+ (0x2DB7, "X"),
+ (0x2DB8, "V"),
+ (0x2DBF, "X"),
+ (0x2DC0, "V"),
+ (0x2DC7, "X"),
+ (0x2DC8, "V"),
+ (0x2DCF, "X"),
+ (0x2DD0, "V"),
+ (0x2DD7, "X"),
+ (0x2DD8, "V"),
+ (0x2DDF, "X"),
+ (0x2DE0, "V"),
+ (0x2E5E, "X"),
+ (0x2E80, "V"),
+ (0x2E9A, "X"),
+ (0x2E9B, "V"),
+ (0x2E9F, "M", "母"),
+ (0x2EA0, "V"),
+ (0x2EF3, "M", "龟"),
+ (0x2EF4, "X"),
+ (0x2F00, "M", "一"),
+ (0x2F01, "M", "丨"),
+ (0x2F02, "M", "丶"),
+ (0x2F03, "M", "丿"),
+ (0x2F04, "M", "乙"),
+ (0x2F05, "M", "亅"),
+ (0x2F06, "M", "二"),
+ (0x2F07, "M", "亠"),
+ (0x2F08, "M", "人"),
+ (0x2F09, "M", "儿"),
+ (0x2F0A, "M", "入"),
+ (0x2F0B, "M", "八"),
+ (0x2F0C, "M", "冂"),
+ (0x2F0D, "M", "冖"),
+ (0x2F0E, "M", "冫"),
+ (0x2F0F, "M", "几"),
+ (0x2F10, "M", "凵"),
+ (0x2F11, "M", "刀"),
+ (0x2F12, "M", "力"),
+ (0x2F13, "M", "勹"),
+ (0x2F14, "M", "匕"),
+ (0x2F15, "M", "匚"),
+ (0x2F16, "M", "匸"),
+ (0x2F17, "M", "十"),
+ (0x2F18, "M", "卜"),
+ (0x2F19, "M", "卩"),
+ ]
+
+
+def _seg_27() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0x2F1A, "M", "厂"),
+ (0x2F1B, "M", "厶"),
+ (0x2F1C, "M", "又"),
+ (0x2F1D, "M", "口"),
+ (0x2F1E, "M", "囗"),
+ (0x2F1F, "M", "土"),
+ (0x2F20, "M", "士"),
+ (0x2F21, "M", "夂"),
+ (0x2F22, "M", "夊"),
+ (0x2F23, "M", "夕"),
+ (0x2F24, "M", "大"),
+ (0x2F25, "M", "女"),
+ (0x2F26, "M", "子"),
+ (0x2F27, "M", "宀"),
+ (0x2F28, "M", "寸"),
+ (0x2F29, "M", "小"),
+ (0x2F2A, "M", "尢"),
+ (0x2F2B, "M", "尸"),
+ (0x2F2C, "M", "屮"),
+ (0x2F2D, "M", "山"),
+ (0x2F2E, "M", "巛"),
+ (0x2F2F, "M", "工"),
+ (0x2F30, "M", "己"),
+ (0x2F31, "M", "巾"),
+ (0x2F32, "M", "干"),
+ (0x2F33, "M", "幺"),
+ (0x2F34, "M", "广"),
+ (0x2F35, "M", "廴"),
+ (0x2F36, "M", "廾"),
+ (0x2F37, "M", "弋"),
+ (0x2F38, "M", "弓"),
+ (0x2F39, "M", "彐"),
+ (0x2F3A, "M", "彡"),
+ (0x2F3B, "M", "彳"),
+ (0x2F3C, "M", "心"),
+ (0x2F3D, "M", "戈"),
+ (0x2F3E, "M", "戶"),
+ (0x2F3F, "M", "手"),
+ (0x2F40, "M", "支"),
+ (0x2F41, "M", "攴"),
+ (0x2F42, "M", "文"),
+ (0x2F43, "M", "斗"),
+ (0x2F44, "M", "斤"),
+ (0x2F45, "M", "方"),
+ (0x2F46, "M", "无"),
+ (0x2F47, "M", "日"),
+ (0x2F48, "M", "曰"),
+ (0x2F49, "M", "月"),
+ (0x2F4A, "M", "木"),
+ (0x2F4B, "M", "欠"),
+ (0x2F4C, "M", "止"),
+ (0x2F4D, "M", "歹"),
+ (0x2F4E, "M", "殳"),
+ (0x2F4F, "M", "毋"),
+ (0x2F50, "M", "比"),
+ (0x2F51, "M", "毛"),
+ (0x2F52, "M", "氏"),
+ (0x2F53, "M", "气"),
+ (0x2F54, "M", "水"),
+ (0x2F55, "M", "火"),
+ (0x2F56, "M", "爪"),
+ (0x2F57, "M", "父"),
+ (0x2F58, "M", "爻"),
+ (0x2F59, "M", "爿"),
+ (0x2F5A, "M", "片"),
+ (0x2F5B, "M", "牙"),
+ (0x2F5C, "M", "牛"),
+ (0x2F5D, "M", "犬"),
+ (0x2F5E, "M", "玄"),
+ (0x2F5F, "M", "玉"),
+ (0x2F60, "M", "瓜"),
+ (0x2F61, "M", "瓦"),
+ (0x2F62, "M", "甘"),
+ (0x2F63, "M", "生"),
+ (0x2F64, "M", "用"),
+ (0x2F65, "M", "田"),
+ (0x2F66, "M", "疋"),
+ (0x2F67, "M", "疒"),
+ (0x2F68, "M", "癶"),
+ (0x2F69, "M", "白"),
+ (0x2F6A, "M", "皮"),
+ (0x2F6B, "M", "皿"),
+ (0x2F6C, "M", "目"),
+ (0x2F6D, "M", "矛"),
+ (0x2F6E, "M", "矢"),
+ (0x2F6F, "M", "石"),
+ (0x2F70, "M", "示"),
+ (0x2F71, "M", "禸"),
+ (0x2F72, "M", "禾"),
+ (0x2F73, "M", "穴"),
+ (0x2F74, "M", "立"),
+ (0x2F75, "M", "竹"),
+ (0x2F76, "M", "米"),
+ (0x2F77, "M", "糸"),
+ (0x2F78, "M", "缶"),
+ (0x2F79, "M", "网"),
+ (0x2F7A, "M", "羊"),
+ (0x2F7B, "M", "羽"),
+ (0x2F7C, "M", "老"),
+ (0x2F7D, "M", "而"),
+ ]
+
+
+def _seg_28() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0x2F7E, "M", "耒"),
+ (0x2F7F, "M", "耳"),
+ (0x2F80, "M", "聿"),
+ (0x2F81, "M", "肉"),
+ (0x2F82, "M", "臣"),
+ (0x2F83, "M", "自"),
+ (0x2F84, "M", "至"),
+ (0x2F85, "M", "臼"),
+ (0x2F86, "M", "舌"),
+ (0x2F87, "M", "舛"),
+ (0x2F88, "M", "舟"),
+ (0x2F89, "M", "艮"),
+ (0x2F8A, "M", "色"),
+ (0x2F8B, "M", "艸"),
+ (0x2F8C, "M", "虍"),
+ (0x2F8D, "M", "虫"),
+ (0x2F8E, "M", "血"),
+ (0x2F8F, "M", "行"),
+ (0x2F90, "M", "衣"),
+ (0x2F91, "M", "襾"),
+ (0x2F92, "M", "見"),
+ (0x2F93, "M", "角"),
+ (0x2F94, "M", "言"),
+ (0x2F95, "M", "谷"),
+ (0x2F96, "M", "豆"),
+ (0x2F97, "M", "豕"),
+ (0x2F98, "M", "豸"),
+ (0x2F99, "M", "貝"),
+ (0x2F9A, "M", "赤"),
+ (0x2F9B, "M", "走"),
+ (0x2F9C, "M", "足"),
+ (0x2F9D, "M", "身"),
+ (0x2F9E, "M", "車"),
+ (0x2F9F, "M", "辛"),
+ (0x2FA0, "M", "辰"),
+ (0x2FA1, "M", "辵"),
+ (0x2FA2, "M", "邑"),
+ (0x2FA3, "M", "酉"),
+ (0x2FA4, "M", "釆"),
+ (0x2FA5, "M", "里"),
+ (0x2FA6, "M", "金"),
+ (0x2FA7, "M", "長"),
+ (0x2FA8, "M", "門"),
+ (0x2FA9, "M", "阜"),
+ (0x2FAA, "M", "隶"),
+ (0x2FAB, "M", "隹"),
+ (0x2FAC, "M", "雨"),
+ (0x2FAD, "M", "靑"),
+ (0x2FAE, "M", "非"),
+ (0x2FAF, "M", "面"),
+ (0x2FB0, "M", "革"),
+ (0x2FB1, "M", "韋"),
+ (0x2FB2, "M", "韭"),
+ (0x2FB3, "M", "音"),
+ (0x2FB4, "M", "頁"),
+ (0x2FB5, "M", "風"),
+ (0x2FB6, "M", "飛"),
+ (0x2FB7, "M", "食"),
+ (0x2FB8, "M", "首"),
+ (0x2FB9, "M", "香"),
+ (0x2FBA, "M", "馬"),
+ (0x2FBB, "M", "骨"),
+ (0x2FBC, "M", "高"),
+ (0x2FBD, "M", "髟"),
+ (0x2FBE, "M", "鬥"),
+ (0x2FBF, "M", "鬯"),
+ (0x2FC0, "M", "鬲"),
+ (0x2FC1, "M", "鬼"),
+ (0x2FC2, "M", "魚"),
+ (0x2FC3, "M", "鳥"),
+ (0x2FC4, "M", "鹵"),
+ (0x2FC5, "M", "鹿"),
+ (0x2FC6, "M", "麥"),
+ (0x2FC7, "M", "麻"),
+ (0x2FC8, "M", "黃"),
+ (0x2FC9, "M", "黍"),
+ (0x2FCA, "M", "黑"),
+ (0x2FCB, "M", "黹"),
+ (0x2FCC, "M", "黽"),
+ (0x2FCD, "M", "鼎"),
+ (0x2FCE, "M", "鼓"),
+ (0x2FCF, "M", "鼠"),
+ (0x2FD0, "M", "鼻"),
+ (0x2FD1, "M", "齊"),
+ (0x2FD2, "M", "齒"),
+ (0x2FD3, "M", "龍"),
+ (0x2FD4, "M", "龜"),
+ (0x2FD5, "M", "龠"),
+ (0x2FD6, "X"),
+ (0x3000, "3", " "),
+ (0x3001, "V"),
+ (0x3002, "M", "."),
+ (0x3003, "V"),
+ (0x3036, "M", "〒"),
+ (0x3037, "V"),
+ (0x3038, "M", "十"),
+ (0x3039, "M", "卄"),
+ (0x303A, "M", "卅"),
+ (0x303B, "V"),
+ (0x3040, "X"),
+ ]
+
+
+def _seg_29() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0x3041, "V"),
+ (0x3097, "X"),
+ (0x3099, "V"),
+ (0x309B, "3", " ゙"),
+ (0x309C, "3", " ゚"),
+ (0x309D, "V"),
+ (0x309F, "M", "より"),
+ (0x30A0, "V"),
+ (0x30FF, "M", "コト"),
+ (0x3100, "X"),
+ (0x3105, "V"),
+ (0x3130, "X"),
+ (0x3131, "M", "ᄀ"),
+ (0x3132, "M", "ᄁ"),
+ (0x3133, "M", "ᆪ"),
+ (0x3134, "M", "ᄂ"),
+ (0x3135, "M", "ᆬ"),
+ (0x3136, "M", "ᆭ"),
+ (0x3137, "M", "ᄃ"),
+ (0x3138, "M", "ᄄ"),
+ (0x3139, "M", "ᄅ"),
+ (0x313A, "M", "ᆰ"),
+ (0x313B, "M", "ᆱ"),
+ (0x313C, "M", "ᆲ"),
+ (0x313D, "M", "ᆳ"),
+ (0x313E, "M", "ᆴ"),
+ (0x313F, "M", "ᆵ"),
+ (0x3140, "M", "ᄚ"),
+ (0x3141, "M", "ᄆ"),
+ (0x3142, "M", "ᄇ"),
+ (0x3143, "M", "ᄈ"),
+ (0x3144, "M", "ᄡ"),
+ (0x3145, "M", "ᄉ"),
+ (0x3146, "M", "ᄊ"),
+ (0x3147, "M", "ᄋ"),
+ (0x3148, "M", "ᄌ"),
+ (0x3149, "M", "ᄍ"),
+ (0x314A, "M", "ᄎ"),
+ (0x314B, "M", "ᄏ"),
+ (0x314C, "M", "ᄐ"),
+ (0x314D, "M", "ᄑ"),
+ (0x314E, "M", "ᄒ"),
+ (0x314F, "M", "ᅡ"),
+ (0x3150, "M", "ᅢ"),
+ (0x3151, "M", "ᅣ"),
+ (0x3152, "M", "ᅤ"),
+ (0x3153, "M", "ᅥ"),
+ (0x3154, "M", "ᅦ"),
+ (0x3155, "M", "ᅧ"),
+ (0x3156, "M", "ᅨ"),
+ (0x3157, "M", "ᅩ"),
+ (0x3158, "M", "ᅪ"),
+ (0x3159, "M", "ᅫ"),
+ (0x315A, "M", "ᅬ"),
+ (0x315B, "M", "ᅭ"),
+ (0x315C, "M", "ᅮ"),
+ (0x315D, "M", "ᅯ"),
+ (0x315E, "M", "ᅰ"),
+ (0x315F, "M", "ᅱ"),
+ (0x3160, "M", "ᅲ"),
+ (0x3161, "M", "ᅳ"),
+ (0x3162, "M", "ᅴ"),
+ (0x3163, "M", "ᅵ"),
+ (0x3164, "X"),
+ (0x3165, "M", "ᄔ"),
+ (0x3166, "M", "ᄕ"),
+ (0x3167, "M", "ᇇ"),
+ (0x3168, "M", "ᇈ"),
+ (0x3169, "M", "ᇌ"),
+ (0x316A, "M", "ᇎ"),
+ (0x316B, "M", "ᇓ"),
+ (0x316C, "M", "ᇗ"),
+ (0x316D, "M", "ᇙ"),
+ (0x316E, "M", "ᄜ"),
+ (0x316F, "M", "ᇝ"),
+ (0x3170, "M", "ᇟ"),
+ (0x3171, "M", "ᄝ"),
+ (0x3172, "M", "ᄞ"),
+ (0x3173, "M", "ᄠ"),
+ (0x3174, "M", "ᄢ"),
+ (0x3175, "M", "ᄣ"),
+ (0x3176, "M", "ᄧ"),
+ (0x3177, "M", "ᄩ"),
+ (0x3178, "M", "ᄫ"),
+ (0x3179, "M", "ᄬ"),
+ (0x317A, "M", "ᄭ"),
+ (0x317B, "M", "ᄮ"),
+ (0x317C, "M", "ᄯ"),
+ (0x317D, "M", "ᄲ"),
+ (0x317E, "M", "ᄶ"),
+ (0x317F, "M", "ᅀ"),
+ (0x3180, "M", "ᅇ"),
+ (0x3181, "M", "ᅌ"),
+ (0x3182, "M", "ᇱ"),
+ (0x3183, "M", "ᇲ"),
+ (0x3184, "M", "ᅗ"),
+ (0x3185, "M", "ᅘ"),
+ (0x3186, "M", "ᅙ"),
+ (0x3187, "M", "ᆄ"),
+ (0x3188, "M", "ᆅ"),
+ ]
+
+
+def _seg_30() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0x3189, "M", "ᆈ"),
+ (0x318A, "M", "ᆑ"),
+ (0x318B, "M", "ᆒ"),
+ (0x318C, "M", "ᆔ"),
+ (0x318D, "M", "ᆞ"),
+ (0x318E, "M", "ᆡ"),
+ (0x318F, "X"),
+ (0x3190, "V"),
+ (0x3192, "M", "一"),
+ (0x3193, "M", "二"),
+ (0x3194, "M", "三"),
+ (0x3195, "M", "四"),
+ (0x3196, "M", "上"),
+ (0x3197, "M", "中"),
+ (0x3198, "M", "下"),
+ (0x3199, "M", "甲"),
+ (0x319A, "M", "乙"),
+ (0x319B, "M", "丙"),
+ (0x319C, "M", "丁"),
+ (0x319D, "M", "天"),
+ (0x319E, "M", "地"),
+ (0x319F, "M", "人"),
+ (0x31A0, "V"),
+ (0x31E4, "X"),
+ (0x31F0, "V"),
+ (0x3200, "3", "(ᄀ)"),
+ (0x3201, "3", "(ᄂ)"),
+ (0x3202, "3", "(ᄃ)"),
+ (0x3203, "3", "(ᄅ)"),
+ (0x3204, "3", "(ᄆ)"),
+ (0x3205, "3", "(ᄇ)"),
+ (0x3206, "3", "(ᄉ)"),
+ (0x3207, "3", "(ᄋ)"),
+ (0x3208, "3", "(ᄌ)"),
+ (0x3209, "3", "(ᄎ)"),
+ (0x320A, "3", "(ᄏ)"),
+ (0x320B, "3", "(ᄐ)"),
+ (0x320C, "3", "(ᄑ)"),
+ (0x320D, "3", "(ᄒ)"),
+ (0x320E, "3", "(가)"),
+ (0x320F, "3", "(나)"),
+ (0x3210, "3", "(다)"),
+ (0x3211, "3", "(라)"),
+ (0x3212, "3", "(마)"),
+ (0x3213, "3", "(바)"),
+ (0x3214, "3", "(사)"),
+ (0x3215, "3", "(아)"),
+ (0x3216, "3", "(자)"),
+ (0x3217, "3", "(차)"),
+ (0x3218, "3", "(카)"),
+ (0x3219, "3", "(타)"),
+ (0x321A, "3", "(파)"),
+ (0x321B, "3", "(하)"),
+ (0x321C, "3", "(주)"),
+ (0x321D, "3", "(오전)"),
+ (0x321E, "3", "(오후)"),
+ (0x321F, "X"),
+ (0x3220, "3", "(一)"),
+ (0x3221, "3", "(二)"),
+ (0x3222, "3", "(三)"),
+ (0x3223, "3", "(四)"),
+ (0x3224, "3", "(五)"),
+ (0x3225, "3", "(六)"),
+ (0x3226, "3", "(七)"),
+ (0x3227, "3", "(八)"),
+ (0x3228, "3", "(九)"),
+ (0x3229, "3", "(十)"),
+ (0x322A, "3", "(月)"),
+ (0x322B, "3", "(火)"),
+ (0x322C, "3", "(水)"),
+ (0x322D, "3", "(木)"),
+ (0x322E, "3", "(金)"),
+ (0x322F, "3", "(土)"),
+ (0x3230, "3", "(日)"),
+ (0x3231, "3", "(株)"),
+ (0x3232, "3", "(有)"),
+ (0x3233, "3", "(社)"),
+ (0x3234, "3", "(名)"),
+ (0x3235, "3", "(特)"),
+ (0x3236, "3", "(財)"),
+ (0x3237, "3", "(祝)"),
+ (0x3238, "3", "(労)"),
+ (0x3239, "3", "(代)"),
+ (0x323A, "3", "(呼)"),
+ (0x323B, "3", "(学)"),
+ (0x323C, "3", "(監)"),
+ (0x323D, "3", "(企)"),
+ (0x323E, "3", "(資)"),
+ (0x323F, "3", "(協)"),
+ (0x3240, "3", "(祭)"),
+ (0x3241, "3", "(休)"),
+ (0x3242, "3", "(自)"),
+ (0x3243, "3", "(至)"),
+ (0x3244, "M", "問"),
+ (0x3245, "M", "幼"),
+ (0x3246, "M", "文"),
+ (0x3247, "M", "箏"),
+ (0x3248, "V"),
+ (0x3250, "M", "pte"),
+ (0x3251, "M", "21"),
+ ]
+
+
+def _seg_31() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0x3252, "M", "22"),
+ (0x3253, "M", "23"),
+ (0x3254, "M", "24"),
+ (0x3255, "M", "25"),
+ (0x3256, "M", "26"),
+ (0x3257, "M", "27"),
+ (0x3258, "M", "28"),
+ (0x3259, "M", "29"),
+ (0x325A, "M", "30"),
+ (0x325B, "M", "31"),
+ (0x325C, "M", "32"),
+ (0x325D, "M", "33"),
+ (0x325E, "M", "34"),
+ (0x325F, "M", "35"),
+ (0x3260, "M", "ᄀ"),
+ (0x3261, "M", "ᄂ"),
+ (0x3262, "M", "ᄃ"),
+ (0x3263, "M", "ᄅ"),
+ (0x3264, "M", "ᄆ"),
+ (0x3265, "M", "ᄇ"),
+ (0x3266, "M", "ᄉ"),
+ (0x3267, "M", "ᄋ"),
+ (0x3268, "M", "ᄌ"),
+ (0x3269, "M", "ᄎ"),
+ (0x326A, "M", "ᄏ"),
+ (0x326B, "M", "ᄐ"),
+ (0x326C, "M", "ᄑ"),
+ (0x326D, "M", "ᄒ"),
+ (0x326E, "M", "가"),
+ (0x326F, "M", "나"),
+ (0x3270, "M", "다"),
+ (0x3271, "M", "라"),
+ (0x3272, "M", "마"),
+ (0x3273, "M", "바"),
+ (0x3274, "M", "사"),
+ (0x3275, "M", "아"),
+ (0x3276, "M", "자"),
+ (0x3277, "M", "차"),
+ (0x3278, "M", "카"),
+ (0x3279, "M", "타"),
+ (0x327A, "M", "파"),
+ (0x327B, "M", "하"),
+ (0x327C, "M", "참고"),
+ (0x327D, "M", "주의"),
+ (0x327E, "M", "우"),
+ (0x327F, "V"),
+ (0x3280, "M", "一"),
+ (0x3281, "M", "二"),
+ (0x3282, "M", "三"),
+ (0x3283, "M", "四"),
+ (0x3284, "M", "五"),
+ (0x3285, "M", "六"),
+ (0x3286, "M", "七"),
+ (0x3287, "M", "八"),
+ (0x3288, "M", "九"),
+ (0x3289, "M", "十"),
+ (0x328A, "M", "月"),
+ (0x328B, "M", "火"),
+ (0x328C, "M", "水"),
+ (0x328D, "M", "木"),
+ (0x328E, "M", "金"),
+ (0x328F, "M", "土"),
+ (0x3290, "M", "日"),
+ (0x3291, "M", "株"),
+ (0x3292, "M", "有"),
+ (0x3293, "M", "社"),
+ (0x3294, "M", "名"),
+ (0x3295, "M", "特"),
+ (0x3296, "M", "財"),
+ (0x3297, "M", "祝"),
+ (0x3298, "M", "労"),
+ (0x3299, "M", "秘"),
+ (0x329A, "M", "男"),
+ (0x329B, "M", "女"),
+ (0x329C, "M", "適"),
+ (0x329D, "M", "優"),
+ (0x329E, "M", "印"),
+ (0x329F, "M", "注"),
+ (0x32A0, "M", "項"),
+ (0x32A1, "M", "休"),
+ (0x32A2, "M", "写"),
+ (0x32A3, "M", "正"),
+ (0x32A4, "M", "上"),
+ (0x32A5, "M", "中"),
+ (0x32A6, "M", "下"),
+ (0x32A7, "M", "左"),
+ (0x32A8, "M", "右"),
+ (0x32A9, "M", "医"),
+ (0x32AA, "M", "宗"),
+ (0x32AB, "M", "学"),
+ (0x32AC, "M", "監"),
+ (0x32AD, "M", "企"),
+ (0x32AE, "M", "資"),
+ (0x32AF, "M", "協"),
+ (0x32B0, "M", "夜"),
+ (0x32B1, "M", "36"),
+ (0x32B2, "M", "37"),
+ (0x32B3, "M", "38"),
+ (0x32B4, "M", "39"),
+ (0x32B5, "M", "40"),
+ ]
+
+
+def _seg_32() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0x32B6, "M", "41"),
+ (0x32B7, "M", "42"),
+ (0x32B8, "M", "43"),
+ (0x32B9, "M", "44"),
+ (0x32BA, "M", "45"),
+ (0x32BB, "M", "46"),
+ (0x32BC, "M", "47"),
+ (0x32BD, "M", "48"),
+ (0x32BE, "M", "49"),
+ (0x32BF, "M", "50"),
+ (0x32C0, "M", "1月"),
+ (0x32C1, "M", "2月"),
+ (0x32C2, "M", "3月"),
+ (0x32C3, "M", "4月"),
+ (0x32C4, "M", "5月"),
+ (0x32C5, "M", "6月"),
+ (0x32C6, "M", "7月"),
+ (0x32C7, "M", "8月"),
+ (0x32C8, "M", "9月"),
+ (0x32C9, "M", "10月"),
+ (0x32CA, "M", "11月"),
+ (0x32CB, "M", "12月"),
+ (0x32CC, "M", "hg"),
+ (0x32CD, "M", "erg"),
+ (0x32CE, "M", "ev"),
+ (0x32CF, "M", "ltd"),
+ (0x32D0, "M", "ア"),
+ (0x32D1, "M", "イ"),
+ (0x32D2, "M", "ウ"),
+ (0x32D3, "M", "エ"),
+ (0x32D4, "M", "オ"),
+ (0x32D5, "M", "カ"),
+ (0x32D6, "M", "キ"),
+ (0x32D7, "M", "ク"),
+ (0x32D8, "M", "ケ"),
+ (0x32D9, "M", "コ"),
+ (0x32DA, "M", "サ"),
+ (0x32DB, "M", "シ"),
+ (0x32DC, "M", "ス"),
+ (0x32DD, "M", "セ"),
+ (0x32DE, "M", "ソ"),
+ (0x32DF, "M", "タ"),
+ (0x32E0, "M", "チ"),
+ (0x32E1, "M", "ツ"),
+ (0x32E2, "M", "テ"),
+ (0x32E3, "M", "ト"),
+ (0x32E4, "M", "ナ"),
+ (0x32E5, "M", "ニ"),
+ (0x32E6, "M", "ヌ"),
+ (0x32E7, "M", "ネ"),
+ (0x32E8, "M", "ノ"),
+ (0x32E9, "M", "ハ"),
+ (0x32EA, "M", "ヒ"),
+ (0x32EB, "M", "フ"),
+ (0x32EC, "M", "ヘ"),
+ (0x32ED, "M", "ホ"),
+ (0x32EE, "M", "マ"),
+ (0x32EF, "M", "ミ"),
+ (0x32F0, "M", "ム"),
+ (0x32F1, "M", "メ"),
+ (0x32F2, "M", "モ"),
+ (0x32F3, "M", "ヤ"),
+ (0x32F4, "M", "ユ"),
+ (0x32F5, "M", "ヨ"),
+ (0x32F6, "M", "ラ"),
+ (0x32F7, "M", "リ"),
+ (0x32F8, "M", "ル"),
+ (0x32F9, "M", "レ"),
+ (0x32FA, "M", "ロ"),
+ (0x32FB, "M", "ワ"),
+ (0x32FC, "M", "ヰ"),
+ (0x32FD, "M", "ヱ"),
+ (0x32FE, "M", "ヲ"),
+ (0x32FF, "M", "令和"),
+ (0x3300, "M", "アパート"),
+ (0x3301, "M", "アルファ"),
+ (0x3302, "M", "アンペア"),
+ (0x3303, "M", "アール"),
+ (0x3304, "M", "イニング"),
+ (0x3305, "M", "インチ"),
+ (0x3306, "M", "ウォン"),
+ (0x3307, "M", "エスクード"),
+ (0x3308, "M", "エーカー"),
+ (0x3309, "M", "オンス"),
+ (0x330A, "M", "オーム"),
+ (0x330B, "M", "カイリ"),
+ (0x330C, "M", "カラット"),
+ (0x330D, "M", "カロリー"),
+ (0x330E, "M", "ガロン"),
+ (0x330F, "M", "ガンマ"),
+ (0x3310, "M", "ギガ"),
+ (0x3311, "M", "ギニー"),
+ (0x3312, "M", "キュリー"),
+ (0x3313, "M", "ギルダー"),
+ (0x3314, "M", "キロ"),
+ (0x3315, "M", "キログラム"),
+ (0x3316, "M", "キロメートル"),
+ (0x3317, "M", "キロワット"),
+ (0x3318, "M", "グラム"),
+ (0x3319, "M", "グラムトン"),
+ ]
+
+
+def _seg_33() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0x331A, "M", "クルゼイロ"),
+ (0x331B, "M", "クローネ"),
+ (0x331C, "M", "ケース"),
+ (0x331D, "M", "コルナ"),
+ (0x331E, "M", "コーポ"),
+ (0x331F, "M", "サイクル"),
+ (0x3320, "M", "サンチーム"),
+ (0x3321, "M", "シリング"),
+ (0x3322, "M", "センチ"),
+ (0x3323, "M", "セント"),
+ (0x3324, "M", "ダース"),
+ (0x3325, "M", "デシ"),
+ (0x3326, "M", "ドル"),
+ (0x3327, "M", "トン"),
+ (0x3328, "M", "ナノ"),
+ (0x3329, "M", "ノット"),
+ (0x332A, "M", "ハイツ"),
+ (0x332B, "M", "パーセント"),
+ (0x332C, "M", "パーツ"),
+ (0x332D, "M", "バーレル"),
+ (0x332E, "M", "ピアストル"),
+ (0x332F, "M", "ピクル"),
+ (0x3330, "M", "ピコ"),
+ (0x3331, "M", "ビル"),
+ (0x3332, "M", "ファラッド"),
+ (0x3333, "M", "フィート"),
+ (0x3334, "M", "ブッシェル"),
+ (0x3335, "M", "フラン"),
+ (0x3336, "M", "ヘクタール"),
+ (0x3337, "M", "ペソ"),
+ (0x3338, "M", "ペニヒ"),
+ (0x3339, "M", "ヘルツ"),
+ (0x333A, "M", "ペンス"),
+ (0x333B, "M", "ページ"),
+ (0x333C, "M", "ベータ"),
+ (0x333D, "M", "ポイント"),
+ (0x333E, "M", "ボルト"),
+ (0x333F, "M", "ホン"),
+ (0x3340, "M", "ポンド"),
+ (0x3341, "M", "ホール"),
+ (0x3342, "M", "ホーン"),
+ (0x3343, "M", "マイクロ"),
+ (0x3344, "M", "マイル"),
+ (0x3345, "M", "マッハ"),
+ (0x3346, "M", "マルク"),
+ (0x3347, "M", "マンション"),
+ (0x3348, "M", "ミクロン"),
+ (0x3349, "M", "ミリ"),
+ (0x334A, "M", "ミリバール"),
+ (0x334B, "M", "メガ"),
+ (0x334C, "M", "メガトン"),
+ (0x334D, "M", "メートル"),
+ (0x334E, "M", "ヤード"),
+ (0x334F, "M", "ヤール"),
+ (0x3350, "M", "ユアン"),
+ (0x3351, "M", "リットル"),
+ (0x3352, "M", "リラ"),
+ (0x3353, "M", "ルピー"),
+ (0x3354, "M", "ルーブル"),
+ (0x3355, "M", "レム"),
+ (0x3356, "M", "レントゲン"),
+ (0x3357, "M", "ワット"),
+ (0x3358, "M", "0点"),
+ (0x3359, "M", "1点"),
+ (0x335A, "M", "2点"),
+ (0x335B, "M", "3点"),
+ (0x335C, "M", "4点"),
+ (0x335D, "M", "5点"),
+ (0x335E, "M", "6点"),
+ (0x335F, "M", "7点"),
+ (0x3360, "M", "8点"),
+ (0x3361, "M", "9点"),
+ (0x3362, "M", "10点"),
+ (0x3363, "M", "11点"),
+ (0x3364, "M", "12点"),
+ (0x3365, "M", "13点"),
+ (0x3366, "M", "14点"),
+ (0x3367, "M", "15点"),
+ (0x3368, "M", "16点"),
+ (0x3369, "M", "17点"),
+ (0x336A, "M", "18点"),
+ (0x336B, "M", "19点"),
+ (0x336C, "M", "20点"),
+ (0x336D, "M", "21点"),
+ (0x336E, "M", "22点"),
+ (0x336F, "M", "23点"),
+ (0x3370, "M", "24点"),
+ (0x3371, "M", "hpa"),
+ (0x3372, "M", "da"),
+ (0x3373, "M", "au"),
+ (0x3374, "M", "bar"),
+ (0x3375, "M", "ov"),
+ (0x3376, "M", "pc"),
+ (0x3377, "M", "dm"),
+ (0x3378, "M", "dm2"),
+ (0x3379, "M", "dm3"),
+ (0x337A, "M", "iu"),
+ (0x337B, "M", "平成"),
+ (0x337C, "M", "昭和"),
+ (0x337D, "M", "大正"),
+ ]
+
+
+def _seg_34() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0x337E, "M", "明治"),
+ (0x337F, "M", "株式会社"),
+ (0x3380, "M", "pa"),
+ (0x3381, "M", "na"),
+ (0x3382, "M", "μa"),
+ (0x3383, "M", "ma"),
+ (0x3384, "M", "ka"),
+ (0x3385, "M", "kb"),
+ (0x3386, "M", "mb"),
+ (0x3387, "M", "gb"),
+ (0x3388, "M", "cal"),
+ (0x3389, "M", "kcal"),
+ (0x338A, "M", "pf"),
+ (0x338B, "M", "nf"),
+ (0x338C, "M", "μf"),
+ (0x338D, "M", "μg"),
+ (0x338E, "M", "mg"),
+ (0x338F, "M", "kg"),
+ (0x3390, "M", "hz"),
+ (0x3391, "M", "khz"),
+ (0x3392, "M", "mhz"),
+ (0x3393, "M", "ghz"),
+ (0x3394, "M", "thz"),
+ (0x3395, "M", "μl"),
+ (0x3396, "M", "ml"),
+ (0x3397, "M", "dl"),
+ (0x3398, "M", "kl"),
+ (0x3399, "M", "fm"),
+ (0x339A, "M", "nm"),
+ (0x339B, "M", "μm"),
+ (0x339C, "M", "mm"),
+ (0x339D, "M", "cm"),
+ (0x339E, "M", "km"),
+ (0x339F, "M", "mm2"),
+ (0x33A0, "M", "cm2"),
+ (0x33A1, "M", "m2"),
+ (0x33A2, "M", "km2"),
+ (0x33A3, "M", "mm3"),
+ (0x33A4, "M", "cm3"),
+ (0x33A5, "M", "m3"),
+ (0x33A6, "M", "km3"),
+ (0x33A7, "M", "m∕s"),
+ (0x33A8, "M", "m∕s2"),
+ (0x33A9, "M", "pa"),
+ (0x33AA, "M", "kpa"),
+ (0x33AB, "M", "mpa"),
+ (0x33AC, "M", "gpa"),
+ (0x33AD, "M", "rad"),
+ (0x33AE, "M", "rad∕s"),
+ (0x33AF, "M", "rad∕s2"),
+ (0x33B0, "M", "ps"),
+ (0x33B1, "M", "ns"),
+ (0x33B2, "M", "μs"),
+ (0x33B3, "M", "ms"),
+ (0x33B4, "M", "pv"),
+ (0x33B5, "M", "nv"),
+ (0x33B6, "M", "μv"),
+ (0x33B7, "M", "mv"),
+ (0x33B8, "M", "kv"),
+ (0x33B9, "M", "mv"),
+ (0x33BA, "M", "pw"),
+ (0x33BB, "M", "nw"),
+ (0x33BC, "M", "μw"),
+ (0x33BD, "M", "mw"),
+ (0x33BE, "M", "kw"),
+ (0x33BF, "M", "mw"),
+ (0x33C0, "M", "kω"),
+ (0x33C1, "M", "mω"),
+ (0x33C2, "X"),
+ (0x33C3, "M", "bq"),
+ (0x33C4, "M", "cc"),
+ (0x33C5, "M", "cd"),
+ (0x33C6, "M", "c∕kg"),
+ (0x33C7, "X"),
+ (0x33C8, "M", "db"),
+ (0x33C9, "M", "gy"),
+ (0x33CA, "M", "ha"),
+ (0x33CB, "M", "hp"),
+ (0x33CC, "M", "in"),
+ (0x33CD, "M", "kk"),
+ (0x33CE, "M", "km"),
+ (0x33CF, "M", "kt"),
+ (0x33D0, "M", "lm"),
+ (0x33D1, "M", "ln"),
+ (0x33D2, "M", "log"),
+ (0x33D3, "M", "lx"),
+ (0x33D4, "M", "mb"),
+ (0x33D5, "M", "mil"),
+ (0x33D6, "M", "mol"),
+ (0x33D7, "M", "ph"),
+ (0x33D8, "X"),
+ (0x33D9, "M", "ppm"),
+ (0x33DA, "M", "pr"),
+ (0x33DB, "M", "sr"),
+ (0x33DC, "M", "sv"),
+ (0x33DD, "M", "wb"),
+ (0x33DE, "M", "v∕m"),
+ (0x33DF, "M", "a∕m"),
+ (0x33E0, "M", "1日"),
+ (0x33E1, "M", "2日"),
+ ]
+
+
+def _seg_35() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0x33E2, "M", "3日"),
+ (0x33E3, "M", "4日"),
+ (0x33E4, "M", "5日"),
+ (0x33E5, "M", "6日"),
+ (0x33E6, "M", "7日"),
+ (0x33E7, "M", "8日"),
+ (0x33E8, "M", "9日"),
+ (0x33E9, "M", "10日"),
+ (0x33EA, "M", "11日"),
+ (0x33EB, "M", "12日"),
+ (0x33EC, "M", "13日"),
+ (0x33ED, "M", "14日"),
+ (0x33EE, "M", "15日"),
+ (0x33EF, "M", "16日"),
+ (0x33F0, "M", "17日"),
+ (0x33F1, "M", "18日"),
+ (0x33F2, "M", "19日"),
+ (0x33F3, "M", "20日"),
+ (0x33F4, "M", "21日"),
+ (0x33F5, "M", "22日"),
+ (0x33F6, "M", "23日"),
+ (0x33F7, "M", "24日"),
+ (0x33F8, "M", "25日"),
+ (0x33F9, "M", "26日"),
+ (0x33FA, "M", "27日"),
+ (0x33FB, "M", "28日"),
+ (0x33FC, "M", "29日"),
+ (0x33FD, "M", "30日"),
+ (0x33FE, "M", "31日"),
+ (0x33FF, "M", "gal"),
+ (0x3400, "V"),
+ (0xA48D, "X"),
+ (0xA490, "V"),
+ (0xA4C7, "X"),
+ (0xA4D0, "V"),
+ (0xA62C, "X"),
+ (0xA640, "M", "ꙁ"),
+ (0xA641, "V"),
+ (0xA642, "M", "ꙃ"),
+ (0xA643, "V"),
+ (0xA644, "M", "ꙅ"),
+ (0xA645, "V"),
+ (0xA646, "M", "ꙇ"),
+ (0xA647, "V"),
+ (0xA648, "M", "ꙉ"),
+ (0xA649, "V"),
+ (0xA64A, "M", "ꙋ"),
+ (0xA64B, "V"),
+ (0xA64C, "M", "ꙍ"),
+ (0xA64D, "V"),
+ (0xA64E, "M", "ꙏ"),
+ (0xA64F, "V"),
+ (0xA650, "M", "ꙑ"),
+ (0xA651, "V"),
+ (0xA652, "M", "ꙓ"),
+ (0xA653, "V"),
+ (0xA654, "M", "ꙕ"),
+ (0xA655, "V"),
+ (0xA656, "M", "ꙗ"),
+ (0xA657, "V"),
+ (0xA658, "M", "ꙙ"),
+ (0xA659, "V"),
+ (0xA65A, "M", "ꙛ"),
+ (0xA65B, "V"),
+ (0xA65C, "M", "ꙝ"),
+ (0xA65D, "V"),
+ (0xA65E, "M", "ꙟ"),
+ (0xA65F, "V"),
+ (0xA660, "M", "ꙡ"),
+ (0xA661, "V"),
+ (0xA662, "M", "ꙣ"),
+ (0xA663, "V"),
+ (0xA664, "M", "ꙥ"),
+ (0xA665, "V"),
+ (0xA666, "M", "ꙧ"),
+ (0xA667, "V"),
+ (0xA668, "M", "ꙩ"),
+ (0xA669, "V"),
+ (0xA66A, "M", "ꙫ"),
+ (0xA66B, "V"),
+ (0xA66C, "M", "ꙭ"),
+ (0xA66D, "V"),
+ (0xA680, "M", "ꚁ"),
+ (0xA681, "V"),
+ (0xA682, "M", "ꚃ"),
+ (0xA683, "V"),
+ (0xA684, "M", "ꚅ"),
+ (0xA685, "V"),
+ (0xA686, "M", "ꚇ"),
+ (0xA687, "V"),
+ (0xA688, "M", "ꚉ"),
+ (0xA689, "V"),
+ (0xA68A, "M", "ꚋ"),
+ (0xA68B, "V"),
+ (0xA68C, "M", "ꚍ"),
+ (0xA68D, "V"),
+ (0xA68E, "M", "ꚏ"),
+ (0xA68F, "V"),
+ (0xA690, "M", "ꚑ"),
+ (0xA691, "V"),
+ ]
+
+
+def _seg_36() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0xA692, "M", "ꚓ"),
+ (0xA693, "V"),
+ (0xA694, "M", "ꚕ"),
+ (0xA695, "V"),
+ (0xA696, "M", "ꚗ"),
+ (0xA697, "V"),
+ (0xA698, "M", "ꚙ"),
+ (0xA699, "V"),
+ (0xA69A, "M", "ꚛ"),
+ (0xA69B, "V"),
+ (0xA69C, "M", "ъ"),
+ (0xA69D, "M", "ь"),
+ (0xA69E, "V"),
+ (0xA6F8, "X"),
+ (0xA700, "V"),
+ (0xA722, "M", "ꜣ"),
+ (0xA723, "V"),
+ (0xA724, "M", "ꜥ"),
+ (0xA725, "V"),
+ (0xA726, "M", "ꜧ"),
+ (0xA727, "V"),
+ (0xA728, "M", "ꜩ"),
+ (0xA729, "V"),
+ (0xA72A, "M", "ꜫ"),
+ (0xA72B, "V"),
+ (0xA72C, "M", "ꜭ"),
+ (0xA72D, "V"),
+ (0xA72E, "M", "ꜯ"),
+ (0xA72F, "V"),
+ (0xA732, "M", "ꜳ"),
+ (0xA733, "V"),
+ (0xA734, "M", "ꜵ"),
+ (0xA735, "V"),
+ (0xA736, "M", "ꜷ"),
+ (0xA737, "V"),
+ (0xA738, "M", "ꜹ"),
+ (0xA739, "V"),
+ (0xA73A, "M", "ꜻ"),
+ (0xA73B, "V"),
+ (0xA73C, "M", "ꜽ"),
+ (0xA73D, "V"),
+ (0xA73E, "M", "ꜿ"),
+ (0xA73F, "V"),
+ (0xA740, "M", "ꝁ"),
+ (0xA741, "V"),
+ (0xA742, "M", "ꝃ"),
+ (0xA743, "V"),
+ (0xA744, "M", "ꝅ"),
+ (0xA745, "V"),
+ (0xA746, "M", "ꝇ"),
+ (0xA747, "V"),
+ (0xA748, "M", "ꝉ"),
+ (0xA749, "V"),
+ (0xA74A, "M", "ꝋ"),
+ (0xA74B, "V"),
+ (0xA74C, "M", "ꝍ"),
+ (0xA74D, "V"),
+ (0xA74E, "M", "ꝏ"),
+ (0xA74F, "V"),
+ (0xA750, "M", "ꝑ"),
+ (0xA751, "V"),
+ (0xA752, "M", "ꝓ"),
+ (0xA753, "V"),
+ (0xA754, "M", "ꝕ"),
+ (0xA755, "V"),
+ (0xA756, "M", "ꝗ"),
+ (0xA757, "V"),
+ (0xA758, "M", "ꝙ"),
+ (0xA759, "V"),
+ (0xA75A, "M", "ꝛ"),
+ (0xA75B, "V"),
+ (0xA75C, "M", "ꝝ"),
+ (0xA75D, "V"),
+ (0xA75E, "M", "ꝟ"),
+ (0xA75F, "V"),
+ (0xA760, "M", "ꝡ"),
+ (0xA761, "V"),
+ (0xA762, "M", "ꝣ"),
+ (0xA763, "V"),
+ (0xA764, "M", "ꝥ"),
+ (0xA765, "V"),
+ (0xA766, "M", "ꝧ"),
+ (0xA767, "V"),
+ (0xA768, "M", "ꝩ"),
+ (0xA769, "V"),
+ (0xA76A, "M", "ꝫ"),
+ (0xA76B, "V"),
+ (0xA76C, "M", "ꝭ"),
+ (0xA76D, "V"),
+ (0xA76E, "M", "ꝯ"),
+ (0xA76F, "V"),
+ (0xA770, "M", "ꝯ"),
+ (0xA771, "V"),
+ (0xA779, "M", "ꝺ"),
+ (0xA77A, "V"),
+ (0xA77B, "M", "ꝼ"),
+ (0xA77C, "V"),
+ (0xA77D, "M", "ᵹ"),
+ (0xA77E, "M", "ꝿ"),
+ (0xA77F, "V"),
+ ]
+
+
+def _seg_37() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0xA780, "M", "ꞁ"),
+ (0xA781, "V"),
+ (0xA782, "M", "ꞃ"),
+ (0xA783, "V"),
+ (0xA784, "M", "ꞅ"),
+ (0xA785, "V"),
+ (0xA786, "M", "ꞇ"),
+ (0xA787, "V"),
+ (0xA78B, "M", "ꞌ"),
+ (0xA78C, "V"),
+ (0xA78D, "M", "ɥ"),
+ (0xA78E, "V"),
+ (0xA790, "M", "ꞑ"),
+ (0xA791, "V"),
+ (0xA792, "M", "ꞓ"),
+ (0xA793, "V"),
+ (0xA796, "M", "ꞗ"),
+ (0xA797, "V"),
+ (0xA798, "M", "ꞙ"),
+ (0xA799, "V"),
+ (0xA79A, "M", "ꞛ"),
+ (0xA79B, "V"),
+ (0xA79C, "M", "ꞝ"),
+ (0xA79D, "V"),
+ (0xA79E, "M", "ꞟ"),
+ (0xA79F, "V"),
+ (0xA7A0, "M", "ꞡ"),
+ (0xA7A1, "V"),
+ (0xA7A2, "M", "ꞣ"),
+ (0xA7A3, "V"),
+ (0xA7A4, "M", "ꞥ"),
+ (0xA7A5, "V"),
+ (0xA7A6, "M", "ꞧ"),
+ (0xA7A7, "V"),
+ (0xA7A8, "M", "ꞩ"),
+ (0xA7A9, "V"),
+ (0xA7AA, "M", "ɦ"),
+ (0xA7AB, "M", "ɜ"),
+ (0xA7AC, "M", "ɡ"),
+ (0xA7AD, "M", "ɬ"),
+ (0xA7AE, "M", "ɪ"),
+ (0xA7AF, "V"),
+ (0xA7B0, "M", "ʞ"),
+ (0xA7B1, "M", "ʇ"),
+ (0xA7B2, "M", "ʝ"),
+ (0xA7B3, "M", "ꭓ"),
+ (0xA7B4, "M", "ꞵ"),
+ (0xA7B5, "V"),
+ (0xA7B6, "M", "ꞷ"),
+ (0xA7B7, "V"),
+ (0xA7B8, "M", "ꞹ"),
+ (0xA7B9, "V"),
+ (0xA7BA, "M", "ꞻ"),
+ (0xA7BB, "V"),
+ (0xA7BC, "M", "ꞽ"),
+ (0xA7BD, "V"),
+ (0xA7BE, "M", "ꞿ"),
+ (0xA7BF, "V"),
+ (0xA7C0, "M", "ꟁ"),
+ (0xA7C1, "V"),
+ (0xA7C2, "M", "ꟃ"),
+ (0xA7C3, "V"),
+ (0xA7C4, "M", "ꞔ"),
+ (0xA7C5, "M", "ʂ"),
+ (0xA7C6, "M", "ᶎ"),
+ (0xA7C7, "M", "ꟈ"),
+ (0xA7C8, "V"),
+ (0xA7C9, "M", "ꟊ"),
+ (0xA7CA, "V"),
+ (0xA7CB, "X"),
+ (0xA7D0, "M", "ꟑ"),
+ (0xA7D1, "V"),
+ (0xA7D2, "X"),
+ (0xA7D3, "V"),
+ (0xA7D4, "X"),
+ (0xA7D5, "V"),
+ (0xA7D6, "M", "ꟗ"),
+ (0xA7D7, "V"),
+ (0xA7D8, "M", "ꟙ"),
+ (0xA7D9, "V"),
+ (0xA7DA, "X"),
+ (0xA7F2, "M", "c"),
+ (0xA7F3, "M", "f"),
+ (0xA7F4, "M", "q"),
+ (0xA7F5, "M", "ꟶ"),
+ (0xA7F6, "V"),
+ (0xA7F8, "M", "ħ"),
+ (0xA7F9, "M", "œ"),
+ (0xA7FA, "V"),
+ (0xA82D, "X"),
+ (0xA830, "V"),
+ (0xA83A, "X"),
+ (0xA840, "V"),
+ (0xA878, "X"),
+ (0xA880, "V"),
+ (0xA8C6, "X"),
+ (0xA8CE, "V"),
+ (0xA8DA, "X"),
+ (0xA8E0, "V"),
+ (0xA954, "X"),
+ ]
+
+
+def _seg_38() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0xA95F, "V"),
+ (0xA97D, "X"),
+ (0xA980, "V"),
+ (0xA9CE, "X"),
+ (0xA9CF, "V"),
+ (0xA9DA, "X"),
+ (0xA9DE, "V"),
+ (0xA9FF, "X"),
+ (0xAA00, "V"),
+ (0xAA37, "X"),
+ (0xAA40, "V"),
+ (0xAA4E, "X"),
+ (0xAA50, "V"),
+ (0xAA5A, "X"),
+ (0xAA5C, "V"),
+ (0xAAC3, "X"),
+ (0xAADB, "V"),
+ (0xAAF7, "X"),
+ (0xAB01, "V"),
+ (0xAB07, "X"),
+ (0xAB09, "V"),
+ (0xAB0F, "X"),
+ (0xAB11, "V"),
+ (0xAB17, "X"),
+ (0xAB20, "V"),
+ (0xAB27, "X"),
+ (0xAB28, "V"),
+ (0xAB2F, "X"),
+ (0xAB30, "V"),
+ (0xAB5C, "M", "ꜧ"),
+ (0xAB5D, "M", "ꬷ"),
+ (0xAB5E, "M", "ɫ"),
+ (0xAB5F, "M", "ꭒ"),
+ (0xAB60, "V"),
+ (0xAB69, "M", "ʍ"),
+ (0xAB6A, "V"),
+ (0xAB6C, "X"),
+ (0xAB70, "M", "Ꭰ"),
+ (0xAB71, "M", "Ꭱ"),
+ (0xAB72, "M", "Ꭲ"),
+ (0xAB73, "M", "Ꭳ"),
+ (0xAB74, "M", "Ꭴ"),
+ (0xAB75, "M", "Ꭵ"),
+ (0xAB76, "M", "Ꭶ"),
+ (0xAB77, "M", "Ꭷ"),
+ (0xAB78, "M", "Ꭸ"),
+ (0xAB79, "M", "Ꭹ"),
+ (0xAB7A, "M", "Ꭺ"),
+ (0xAB7B, "M", "Ꭻ"),
+ (0xAB7C, "M", "Ꭼ"),
+ (0xAB7D, "M", "Ꭽ"),
+ (0xAB7E, "M", "Ꭾ"),
+ (0xAB7F, "M", "Ꭿ"),
+ (0xAB80, "M", "Ꮀ"),
+ (0xAB81, "M", "Ꮁ"),
+ (0xAB82, "M", "Ꮂ"),
+ (0xAB83, "M", "Ꮃ"),
+ (0xAB84, "M", "Ꮄ"),
+ (0xAB85, "M", "Ꮅ"),
+ (0xAB86, "M", "Ꮆ"),
+ (0xAB87, "M", "Ꮇ"),
+ (0xAB88, "M", "Ꮈ"),
+ (0xAB89, "M", "Ꮉ"),
+ (0xAB8A, "M", "Ꮊ"),
+ (0xAB8B, "M", "Ꮋ"),
+ (0xAB8C, "M", "Ꮌ"),
+ (0xAB8D, "M", "Ꮍ"),
+ (0xAB8E, "M", "Ꮎ"),
+ (0xAB8F, "M", "Ꮏ"),
+ (0xAB90, "M", "Ꮐ"),
+ (0xAB91, "M", "Ꮑ"),
+ (0xAB92, "M", "Ꮒ"),
+ (0xAB93, "M", "Ꮓ"),
+ (0xAB94, "M", "Ꮔ"),
+ (0xAB95, "M", "Ꮕ"),
+ (0xAB96, "M", "Ꮖ"),
+ (0xAB97, "M", "Ꮗ"),
+ (0xAB98, "M", "Ꮘ"),
+ (0xAB99, "M", "Ꮙ"),
+ (0xAB9A, "M", "Ꮚ"),
+ (0xAB9B, "M", "Ꮛ"),
+ (0xAB9C, "M", "Ꮜ"),
+ (0xAB9D, "M", "Ꮝ"),
+ (0xAB9E, "M", "Ꮞ"),
+ (0xAB9F, "M", "Ꮟ"),
+ (0xABA0, "M", "Ꮠ"),
+ (0xABA1, "M", "Ꮡ"),
+ (0xABA2, "M", "Ꮢ"),
+ (0xABA3, "M", "Ꮣ"),
+ (0xABA4, "M", "Ꮤ"),
+ (0xABA5, "M", "Ꮥ"),
+ (0xABA6, "M", "Ꮦ"),
+ (0xABA7, "M", "Ꮧ"),
+ (0xABA8, "M", "Ꮨ"),
+ (0xABA9, "M", "Ꮩ"),
+ (0xABAA, "M", "Ꮪ"),
+ (0xABAB, "M", "Ꮫ"),
+ (0xABAC, "M", "Ꮬ"),
+ (0xABAD, "M", "Ꮭ"),
+ (0xABAE, "M", "Ꮮ"),
+ ]
+
+
+def _seg_39() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0xABAF, "M", "Ꮯ"),
+ (0xABB0, "M", "Ꮰ"),
+ (0xABB1, "M", "Ꮱ"),
+ (0xABB2, "M", "Ꮲ"),
+ (0xABB3, "M", "Ꮳ"),
+ (0xABB4, "M", "Ꮴ"),
+ (0xABB5, "M", "Ꮵ"),
+ (0xABB6, "M", "Ꮶ"),
+ (0xABB7, "M", "Ꮷ"),
+ (0xABB8, "M", "Ꮸ"),
+ (0xABB9, "M", "Ꮹ"),
+ (0xABBA, "M", "Ꮺ"),
+ (0xABBB, "M", "Ꮻ"),
+ (0xABBC, "M", "Ꮼ"),
+ (0xABBD, "M", "Ꮽ"),
+ (0xABBE, "M", "Ꮾ"),
+ (0xABBF, "M", "Ꮿ"),
+ (0xABC0, "V"),
+ (0xABEE, "X"),
+ (0xABF0, "V"),
+ (0xABFA, "X"),
+ (0xAC00, "V"),
+ (0xD7A4, "X"),
+ (0xD7B0, "V"),
+ (0xD7C7, "X"),
+ (0xD7CB, "V"),
+ (0xD7FC, "X"),
+ (0xF900, "M", "豈"),
+ (0xF901, "M", "更"),
+ (0xF902, "M", "車"),
+ (0xF903, "M", "賈"),
+ (0xF904, "M", "滑"),
+ (0xF905, "M", "串"),
+ (0xF906, "M", "句"),
+ (0xF907, "M", "龜"),
+ (0xF909, "M", "契"),
+ (0xF90A, "M", "金"),
+ (0xF90B, "M", "喇"),
+ (0xF90C, "M", "奈"),
+ (0xF90D, "M", "懶"),
+ (0xF90E, "M", "癩"),
+ (0xF90F, "M", "羅"),
+ (0xF910, "M", "蘿"),
+ (0xF911, "M", "螺"),
+ (0xF912, "M", "裸"),
+ (0xF913, "M", "邏"),
+ (0xF914, "M", "樂"),
+ (0xF915, "M", "洛"),
+ (0xF916, "M", "烙"),
+ (0xF917, "M", "珞"),
+ (0xF918, "M", "落"),
+ (0xF919, "M", "酪"),
+ (0xF91A, "M", "駱"),
+ (0xF91B, "M", "亂"),
+ (0xF91C, "M", "卵"),
+ (0xF91D, "M", "欄"),
+ (0xF91E, "M", "爛"),
+ (0xF91F, "M", "蘭"),
+ (0xF920, "M", "鸞"),
+ (0xF921, "M", "嵐"),
+ (0xF922, "M", "濫"),
+ (0xF923, "M", "藍"),
+ (0xF924, "M", "襤"),
+ (0xF925, "M", "拉"),
+ (0xF926, "M", "臘"),
+ (0xF927, "M", "蠟"),
+ (0xF928, "M", "廊"),
+ (0xF929, "M", "朗"),
+ (0xF92A, "M", "浪"),
+ (0xF92B, "M", "狼"),
+ (0xF92C, "M", "郎"),
+ (0xF92D, "M", "來"),
+ (0xF92E, "M", "冷"),
+ (0xF92F, "M", "勞"),
+ (0xF930, "M", "擄"),
+ (0xF931, "M", "櫓"),
+ (0xF932, "M", "爐"),
+ (0xF933, "M", "盧"),
+ (0xF934, "M", "老"),
+ (0xF935, "M", "蘆"),
+ (0xF936, "M", "虜"),
+ (0xF937, "M", "路"),
+ (0xF938, "M", "露"),
+ (0xF939, "M", "魯"),
+ (0xF93A, "M", "鷺"),
+ (0xF93B, "M", "碌"),
+ (0xF93C, "M", "祿"),
+ (0xF93D, "M", "綠"),
+ (0xF93E, "M", "菉"),
+ (0xF93F, "M", "錄"),
+ (0xF940, "M", "鹿"),
+ (0xF941, "M", "論"),
+ (0xF942, "M", "壟"),
+ (0xF943, "M", "弄"),
+ (0xF944, "M", "籠"),
+ (0xF945, "M", "聾"),
+ (0xF946, "M", "牢"),
+ (0xF947, "M", "磊"),
+ (0xF948, "M", "賂"),
+ (0xF949, "M", "雷"),
+ ]
+
+
+def _seg_40() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0xF94A, "M", "壘"),
+ (0xF94B, "M", "屢"),
+ (0xF94C, "M", "樓"),
+ (0xF94D, "M", "淚"),
+ (0xF94E, "M", "漏"),
+ (0xF94F, "M", "累"),
+ (0xF950, "M", "縷"),
+ (0xF951, "M", "陋"),
+ (0xF952, "M", "勒"),
+ (0xF953, "M", "肋"),
+ (0xF954, "M", "凜"),
+ (0xF955, "M", "凌"),
+ (0xF956, "M", "稜"),
+ (0xF957, "M", "綾"),
+ (0xF958, "M", "菱"),
+ (0xF959, "M", "陵"),
+ (0xF95A, "M", "讀"),
+ (0xF95B, "M", "拏"),
+ (0xF95C, "M", "樂"),
+ (0xF95D, "M", "諾"),
+ (0xF95E, "M", "丹"),
+ (0xF95F, "M", "寧"),
+ (0xF960, "M", "怒"),
+ (0xF961, "M", "率"),
+ (0xF962, "M", "異"),
+ (0xF963, "M", "北"),
+ (0xF964, "M", "磻"),
+ (0xF965, "M", "便"),
+ (0xF966, "M", "復"),
+ (0xF967, "M", "不"),
+ (0xF968, "M", "泌"),
+ (0xF969, "M", "數"),
+ (0xF96A, "M", "索"),
+ (0xF96B, "M", "參"),
+ (0xF96C, "M", "塞"),
+ (0xF96D, "M", "省"),
+ (0xF96E, "M", "葉"),
+ (0xF96F, "M", "說"),
+ (0xF970, "M", "殺"),
+ (0xF971, "M", "辰"),
+ (0xF972, "M", "沈"),
+ (0xF973, "M", "拾"),
+ (0xF974, "M", "若"),
+ (0xF975, "M", "掠"),
+ (0xF976, "M", "略"),
+ (0xF977, "M", "亮"),
+ (0xF978, "M", "兩"),
+ (0xF979, "M", "凉"),
+ (0xF97A, "M", "梁"),
+ (0xF97B, "M", "糧"),
+ (0xF97C, "M", "良"),
+ (0xF97D, "M", "諒"),
+ (0xF97E, "M", "量"),
+ (0xF97F, "M", "勵"),
+ (0xF980, "M", "呂"),
+ (0xF981, "M", "女"),
+ (0xF982, "M", "廬"),
+ (0xF983, "M", "旅"),
+ (0xF984, "M", "濾"),
+ (0xF985, "M", "礪"),
+ (0xF986, "M", "閭"),
+ (0xF987, "M", "驪"),
+ (0xF988, "M", "麗"),
+ (0xF989, "M", "黎"),
+ (0xF98A, "M", "力"),
+ (0xF98B, "M", "曆"),
+ (0xF98C, "M", "歷"),
+ (0xF98D, "M", "轢"),
+ (0xF98E, "M", "年"),
+ (0xF98F, "M", "憐"),
+ (0xF990, "M", "戀"),
+ (0xF991, "M", "撚"),
+ (0xF992, "M", "漣"),
+ (0xF993, "M", "煉"),
+ (0xF994, "M", "璉"),
+ (0xF995, "M", "秊"),
+ (0xF996, "M", "練"),
+ (0xF997, "M", "聯"),
+ (0xF998, "M", "輦"),
+ (0xF999, "M", "蓮"),
+ (0xF99A, "M", "連"),
+ (0xF99B, "M", "鍊"),
+ (0xF99C, "M", "列"),
+ (0xF99D, "M", "劣"),
+ (0xF99E, "M", "咽"),
+ (0xF99F, "M", "烈"),
+ (0xF9A0, "M", "裂"),
+ (0xF9A1, "M", "說"),
+ (0xF9A2, "M", "廉"),
+ (0xF9A3, "M", "念"),
+ (0xF9A4, "M", "捻"),
+ (0xF9A5, "M", "殮"),
+ (0xF9A6, "M", "簾"),
+ (0xF9A7, "M", "獵"),
+ (0xF9A8, "M", "令"),
+ (0xF9A9, "M", "囹"),
+ (0xF9AA, "M", "寧"),
+ (0xF9AB, "M", "嶺"),
+ (0xF9AC, "M", "怜"),
+ (0xF9AD, "M", "玲"),
+ ]
+
+
+def _seg_41() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0xF9AE, "M", "瑩"),
+ (0xF9AF, "M", "羚"),
+ (0xF9B0, "M", "聆"),
+ (0xF9B1, "M", "鈴"),
+ (0xF9B2, "M", "零"),
+ (0xF9B3, "M", "靈"),
+ (0xF9B4, "M", "領"),
+ (0xF9B5, "M", "例"),
+ (0xF9B6, "M", "禮"),
+ (0xF9B7, "M", "醴"),
+ (0xF9B8, "M", "隸"),
+ (0xF9B9, "M", "惡"),
+ (0xF9BA, "M", "了"),
+ (0xF9BB, "M", "僚"),
+ (0xF9BC, "M", "寮"),
+ (0xF9BD, "M", "尿"),
+ (0xF9BE, "M", "料"),
+ (0xF9BF, "M", "樂"),
+ (0xF9C0, "M", "燎"),
+ (0xF9C1, "M", "療"),
+ (0xF9C2, "M", "蓼"),
+ (0xF9C3, "M", "遼"),
+ (0xF9C4, "M", "龍"),
+ (0xF9C5, "M", "暈"),
+ (0xF9C6, "M", "阮"),
+ (0xF9C7, "M", "劉"),
+ (0xF9C8, "M", "杻"),
+ (0xF9C9, "M", "柳"),
+ (0xF9CA, "M", "流"),
+ (0xF9CB, "M", "溜"),
+ (0xF9CC, "M", "琉"),
+ (0xF9CD, "M", "留"),
+ (0xF9CE, "M", "硫"),
+ (0xF9CF, "M", "紐"),
+ (0xF9D0, "M", "類"),
+ (0xF9D1, "M", "六"),
+ (0xF9D2, "M", "戮"),
+ (0xF9D3, "M", "陸"),
+ (0xF9D4, "M", "倫"),
+ (0xF9D5, "M", "崙"),
+ (0xF9D6, "M", "淪"),
+ (0xF9D7, "M", "輪"),
+ (0xF9D8, "M", "律"),
+ (0xF9D9, "M", "慄"),
+ (0xF9DA, "M", "栗"),
+ (0xF9DB, "M", "率"),
+ (0xF9DC, "M", "隆"),
+ (0xF9DD, "M", "利"),
+ (0xF9DE, "M", "吏"),
+ (0xF9DF, "M", "履"),
+ (0xF9E0, "M", "易"),
+ (0xF9E1, "M", "李"),
+ (0xF9E2, "M", "梨"),
+ (0xF9E3, "M", "泥"),
+ (0xF9E4, "M", "理"),
+ (0xF9E5, "M", "痢"),
+ (0xF9E6, "M", "罹"),
+ (0xF9E7, "M", "裏"),
+ (0xF9E8, "M", "裡"),
+ (0xF9E9, "M", "里"),
+ (0xF9EA, "M", "離"),
+ (0xF9EB, "M", "匿"),
+ (0xF9EC, "M", "溺"),
+ (0xF9ED, "M", "吝"),
+ (0xF9EE, "M", "燐"),
+ (0xF9EF, "M", "璘"),
+ (0xF9F0, "M", "藺"),
+ (0xF9F1, "M", "隣"),
+ (0xF9F2, "M", "鱗"),
+ (0xF9F3, "M", "麟"),
+ (0xF9F4, "M", "林"),
+ (0xF9F5, "M", "淋"),
+ (0xF9F6, "M", "臨"),
+ (0xF9F7, "M", "立"),
+ (0xF9F8, "M", "笠"),
+ (0xF9F9, "M", "粒"),
+ (0xF9FA, "M", "狀"),
+ (0xF9FB, "M", "炙"),
+ (0xF9FC, "M", "識"),
+ (0xF9FD, "M", "什"),
+ (0xF9FE, "M", "茶"),
+ (0xF9FF, "M", "刺"),
+ (0xFA00, "M", "切"),
+ (0xFA01, "M", "度"),
+ (0xFA02, "M", "拓"),
+ (0xFA03, "M", "糖"),
+ (0xFA04, "M", "宅"),
+ (0xFA05, "M", "洞"),
+ (0xFA06, "M", "暴"),
+ (0xFA07, "M", "輻"),
+ (0xFA08, "M", "行"),
+ (0xFA09, "M", "降"),
+ (0xFA0A, "M", "見"),
+ (0xFA0B, "M", "廓"),
+ (0xFA0C, "M", "兀"),
+ (0xFA0D, "M", "嗀"),
+ (0xFA0E, "V"),
+ (0xFA10, "M", "塚"),
+ (0xFA11, "V"),
+ (0xFA12, "M", "晴"),
+ ]
+
+
+def _seg_42() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0xFA13, "V"),
+ (0xFA15, "M", "凞"),
+ (0xFA16, "M", "猪"),
+ (0xFA17, "M", "益"),
+ (0xFA18, "M", "礼"),
+ (0xFA19, "M", "神"),
+ (0xFA1A, "M", "祥"),
+ (0xFA1B, "M", "福"),
+ (0xFA1C, "M", "靖"),
+ (0xFA1D, "M", "精"),
+ (0xFA1E, "M", "羽"),
+ (0xFA1F, "V"),
+ (0xFA20, "M", "蘒"),
+ (0xFA21, "V"),
+ (0xFA22, "M", "諸"),
+ (0xFA23, "V"),
+ (0xFA25, "M", "逸"),
+ (0xFA26, "M", "都"),
+ (0xFA27, "V"),
+ (0xFA2A, "M", "飯"),
+ (0xFA2B, "M", "飼"),
+ (0xFA2C, "M", "館"),
+ (0xFA2D, "M", "鶴"),
+ (0xFA2E, "M", "郞"),
+ (0xFA2F, "M", "隷"),
+ (0xFA30, "M", "侮"),
+ (0xFA31, "M", "僧"),
+ (0xFA32, "M", "免"),
+ (0xFA33, "M", "勉"),
+ (0xFA34, "M", "勤"),
+ (0xFA35, "M", "卑"),
+ (0xFA36, "M", "喝"),
+ (0xFA37, "M", "嘆"),
+ (0xFA38, "M", "器"),
+ (0xFA39, "M", "塀"),
+ (0xFA3A, "M", "墨"),
+ (0xFA3B, "M", "層"),
+ (0xFA3C, "M", "屮"),
+ (0xFA3D, "M", "悔"),
+ (0xFA3E, "M", "慨"),
+ (0xFA3F, "M", "憎"),
+ (0xFA40, "M", "懲"),
+ (0xFA41, "M", "敏"),
+ (0xFA42, "M", "既"),
+ (0xFA43, "M", "暑"),
+ (0xFA44, "M", "梅"),
+ (0xFA45, "M", "海"),
+ (0xFA46, "M", "渚"),
+ (0xFA47, "M", "漢"),
+ (0xFA48, "M", "煮"),
+ (0xFA49, "M", "爫"),
+ (0xFA4A, "M", "琢"),
+ (0xFA4B, "M", "碑"),
+ (0xFA4C, "M", "社"),
+ (0xFA4D, "M", "祉"),
+ (0xFA4E, "M", "祈"),
+ (0xFA4F, "M", "祐"),
+ (0xFA50, "M", "祖"),
+ (0xFA51, "M", "祝"),
+ (0xFA52, "M", "禍"),
+ (0xFA53, "M", "禎"),
+ (0xFA54, "M", "穀"),
+ (0xFA55, "M", "突"),
+ (0xFA56, "M", "節"),
+ (0xFA57, "M", "練"),
+ (0xFA58, "M", "縉"),
+ (0xFA59, "M", "繁"),
+ (0xFA5A, "M", "署"),
+ (0xFA5B, "M", "者"),
+ (0xFA5C, "M", "臭"),
+ (0xFA5D, "M", "艹"),
+ (0xFA5F, "M", "著"),
+ (0xFA60, "M", "褐"),
+ (0xFA61, "M", "視"),
+ (0xFA62, "M", "謁"),
+ (0xFA63, "M", "謹"),
+ (0xFA64, "M", "賓"),
+ (0xFA65, "M", "贈"),
+ (0xFA66, "M", "辶"),
+ (0xFA67, "M", "逸"),
+ (0xFA68, "M", "難"),
+ (0xFA69, "M", "響"),
+ (0xFA6A, "M", "頻"),
+ (0xFA6B, "M", "恵"),
+ (0xFA6C, "M", "𤋮"),
+ (0xFA6D, "M", "舘"),
+ (0xFA6E, "X"),
+ (0xFA70, "M", "並"),
+ (0xFA71, "M", "况"),
+ (0xFA72, "M", "全"),
+ (0xFA73, "M", "侀"),
+ (0xFA74, "M", "充"),
+ (0xFA75, "M", "冀"),
+ (0xFA76, "M", "勇"),
+ (0xFA77, "M", "勺"),
+ (0xFA78, "M", "喝"),
+ (0xFA79, "M", "啕"),
+ (0xFA7A, "M", "喙"),
+ (0xFA7B, "M", "嗢"),
+ (0xFA7C, "M", "塚"),
+ ]
+
+
+def _seg_43() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0xFA7D, "M", "墳"),
+ (0xFA7E, "M", "奄"),
+ (0xFA7F, "M", "奔"),
+ (0xFA80, "M", "婢"),
+ (0xFA81, "M", "嬨"),
+ (0xFA82, "M", "廒"),
+ (0xFA83, "M", "廙"),
+ (0xFA84, "M", "彩"),
+ (0xFA85, "M", "徭"),
+ (0xFA86, "M", "惘"),
+ (0xFA87, "M", "慎"),
+ (0xFA88, "M", "愈"),
+ (0xFA89, "M", "憎"),
+ (0xFA8A, "M", "慠"),
+ (0xFA8B, "M", "懲"),
+ (0xFA8C, "M", "戴"),
+ (0xFA8D, "M", "揄"),
+ (0xFA8E, "M", "搜"),
+ (0xFA8F, "M", "摒"),
+ (0xFA90, "M", "敖"),
+ (0xFA91, "M", "晴"),
+ (0xFA92, "M", "朗"),
+ (0xFA93, "M", "望"),
+ (0xFA94, "M", "杖"),
+ (0xFA95, "M", "歹"),
+ (0xFA96, "M", "殺"),
+ (0xFA97, "M", "流"),
+ (0xFA98, "M", "滛"),
+ (0xFA99, "M", "滋"),
+ (0xFA9A, "M", "漢"),
+ (0xFA9B, "M", "瀞"),
+ (0xFA9C, "M", "煮"),
+ (0xFA9D, "M", "瞧"),
+ (0xFA9E, "M", "爵"),
+ (0xFA9F, "M", "犯"),
+ (0xFAA0, "M", "猪"),
+ (0xFAA1, "M", "瑱"),
+ (0xFAA2, "M", "甆"),
+ (0xFAA3, "M", "画"),
+ (0xFAA4, "M", "瘝"),
+ (0xFAA5, "M", "瘟"),
+ (0xFAA6, "M", "益"),
+ (0xFAA7, "M", "盛"),
+ (0xFAA8, "M", "直"),
+ (0xFAA9, "M", "睊"),
+ (0xFAAA, "M", "着"),
+ (0xFAAB, "M", "磌"),
+ (0xFAAC, "M", "窱"),
+ (0xFAAD, "M", "節"),
+ (0xFAAE, "M", "类"),
+ (0xFAAF, "M", "絛"),
+ (0xFAB0, "M", "練"),
+ (0xFAB1, "M", "缾"),
+ (0xFAB2, "M", "者"),
+ (0xFAB3, "M", "荒"),
+ (0xFAB4, "M", "華"),
+ (0xFAB5, "M", "蝹"),
+ (0xFAB6, "M", "襁"),
+ (0xFAB7, "M", "覆"),
+ (0xFAB8, "M", "視"),
+ (0xFAB9, "M", "調"),
+ (0xFABA, "M", "諸"),
+ (0xFABB, "M", "請"),
+ (0xFABC, "M", "謁"),
+ (0xFABD, "M", "諾"),
+ (0xFABE, "M", "諭"),
+ (0xFABF, "M", "謹"),
+ (0xFAC0, "M", "變"),
+ (0xFAC1, "M", "贈"),
+ (0xFAC2, "M", "輸"),
+ (0xFAC3, "M", "遲"),
+ (0xFAC4, "M", "醙"),
+ (0xFAC5, "M", "鉶"),
+ (0xFAC6, "M", "陼"),
+ (0xFAC7, "M", "難"),
+ (0xFAC8, "M", "靖"),
+ (0xFAC9, "M", "韛"),
+ (0xFACA, "M", "響"),
+ (0xFACB, "M", "頋"),
+ (0xFACC, "M", "頻"),
+ (0xFACD, "M", "鬒"),
+ (0xFACE, "M", "龜"),
+ (0xFACF, "M", "𢡊"),
+ (0xFAD0, "M", "𢡄"),
+ (0xFAD1, "M", "𣏕"),
+ (0xFAD2, "M", "㮝"),
+ (0xFAD3, "M", "䀘"),
+ (0xFAD4, "M", "䀹"),
+ (0xFAD5, "M", "𥉉"),
+ (0xFAD6, "M", "𥳐"),
+ (0xFAD7, "M", "𧻓"),
+ (0xFAD8, "M", "齃"),
+ (0xFAD9, "M", "龎"),
+ (0xFADA, "X"),
+ (0xFB00, "M", "ff"),
+ (0xFB01, "M", "fi"),
+ (0xFB02, "M", "fl"),
+ (0xFB03, "M", "ffi"),
+ (0xFB04, "M", "ffl"),
+ (0xFB05, "M", "st"),
+ ]
+
+
+def _seg_44() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0xFB07, "X"),
+ (0xFB13, "M", "մն"),
+ (0xFB14, "M", "մե"),
+ (0xFB15, "M", "մի"),
+ (0xFB16, "M", "վն"),
+ (0xFB17, "M", "մխ"),
+ (0xFB18, "X"),
+ (0xFB1D, "M", "יִ"),
+ (0xFB1E, "V"),
+ (0xFB1F, "M", "ײַ"),
+ (0xFB20, "M", "ע"),
+ (0xFB21, "M", "א"),
+ (0xFB22, "M", "ד"),
+ (0xFB23, "M", "ה"),
+ (0xFB24, "M", "כ"),
+ (0xFB25, "M", "ל"),
+ (0xFB26, "M", "ם"),
+ (0xFB27, "M", "ר"),
+ (0xFB28, "M", "ת"),
+ (0xFB29, "3", "+"),
+ (0xFB2A, "M", "שׁ"),
+ (0xFB2B, "M", "שׂ"),
+ (0xFB2C, "M", "שּׁ"),
+ (0xFB2D, "M", "שּׂ"),
+ (0xFB2E, "M", "אַ"),
+ (0xFB2F, "M", "אָ"),
+ (0xFB30, "M", "אּ"),
+ (0xFB31, "M", "בּ"),
+ (0xFB32, "M", "גּ"),
+ (0xFB33, "M", "דּ"),
+ (0xFB34, "M", "הּ"),
+ (0xFB35, "M", "וּ"),
+ (0xFB36, "M", "זּ"),
+ (0xFB37, "X"),
+ (0xFB38, "M", "טּ"),
+ (0xFB39, "M", "יּ"),
+ (0xFB3A, "M", "ךּ"),
+ (0xFB3B, "M", "כּ"),
+ (0xFB3C, "M", "לּ"),
+ (0xFB3D, "X"),
+ (0xFB3E, "M", "מּ"),
+ (0xFB3F, "X"),
+ (0xFB40, "M", "נּ"),
+ (0xFB41, "M", "סּ"),
+ (0xFB42, "X"),
+ (0xFB43, "M", "ףּ"),
+ (0xFB44, "M", "פּ"),
+ (0xFB45, "X"),
+ (0xFB46, "M", "צּ"),
+ (0xFB47, "M", "קּ"),
+ (0xFB48, "M", "רּ"),
+ (0xFB49, "M", "שּ"),
+ (0xFB4A, "M", "תּ"),
+ (0xFB4B, "M", "וֹ"),
+ (0xFB4C, "M", "בֿ"),
+ (0xFB4D, "M", "כֿ"),
+ (0xFB4E, "M", "פֿ"),
+ (0xFB4F, "M", "אל"),
+ (0xFB50, "M", "ٱ"),
+ (0xFB52, "M", "ٻ"),
+ (0xFB56, "M", "پ"),
+ (0xFB5A, "M", "ڀ"),
+ (0xFB5E, "M", "ٺ"),
+ (0xFB62, "M", "ٿ"),
+ (0xFB66, "M", "ٹ"),
+ (0xFB6A, "M", "ڤ"),
+ (0xFB6E, "M", "ڦ"),
+ (0xFB72, "M", "ڄ"),
+ (0xFB76, "M", "ڃ"),
+ (0xFB7A, "M", "چ"),
+ (0xFB7E, "M", "ڇ"),
+ (0xFB82, "M", "ڍ"),
+ (0xFB84, "M", "ڌ"),
+ (0xFB86, "M", "ڎ"),
+ (0xFB88, "M", "ڈ"),
+ (0xFB8A, "M", "ژ"),
+ (0xFB8C, "M", "ڑ"),
+ (0xFB8E, "M", "ک"),
+ (0xFB92, "M", "گ"),
+ (0xFB96, "M", "ڳ"),
+ (0xFB9A, "M", "ڱ"),
+ (0xFB9E, "M", "ں"),
+ (0xFBA0, "M", "ڻ"),
+ (0xFBA4, "M", "ۀ"),
+ (0xFBA6, "M", "ہ"),
+ (0xFBAA, "M", "ھ"),
+ (0xFBAE, "M", "ے"),
+ (0xFBB0, "M", "ۓ"),
+ (0xFBB2, "V"),
+ (0xFBC3, "X"),
+ (0xFBD3, "M", "ڭ"),
+ (0xFBD7, "M", "ۇ"),
+ (0xFBD9, "M", "ۆ"),
+ (0xFBDB, "M", "ۈ"),
+ (0xFBDD, "M", "ۇٴ"),
+ (0xFBDE, "M", "ۋ"),
+ (0xFBE0, "M", "ۅ"),
+ (0xFBE2, "M", "ۉ"),
+ (0xFBE4, "M", "ې"),
+ (0xFBE8, "M", "ى"),
+ ]
+
+
+def _seg_45() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0xFBEA, "M", "ئا"),
+ (0xFBEC, "M", "ئە"),
+ (0xFBEE, "M", "ئو"),
+ (0xFBF0, "M", "ئۇ"),
+ (0xFBF2, "M", "ئۆ"),
+ (0xFBF4, "M", "ئۈ"),
+ (0xFBF6, "M", "ئې"),
+ (0xFBF9, "M", "ئى"),
+ (0xFBFC, "M", "ی"),
+ (0xFC00, "M", "ئج"),
+ (0xFC01, "M", "ئح"),
+ (0xFC02, "M", "ئم"),
+ (0xFC03, "M", "ئى"),
+ (0xFC04, "M", "ئي"),
+ (0xFC05, "M", "بج"),
+ (0xFC06, "M", "بح"),
+ (0xFC07, "M", "بخ"),
+ (0xFC08, "M", "بم"),
+ (0xFC09, "M", "بى"),
+ (0xFC0A, "M", "بي"),
+ (0xFC0B, "M", "تج"),
+ (0xFC0C, "M", "تح"),
+ (0xFC0D, "M", "تخ"),
+ (0xFC0E, "M", "تم"),
+ (0xFC0F, "M", "تى"),
+ (0xFC10, "M", "تي"),
+ (0xFC11, "M", "ثج"),
+ (0xFC12, "M", "ثم"),
+ (0xFC13, "M", "ثى"),
+ (0xFC14, "M", "ثي"),
+ (0xFC15, "M", "جح"),
+ (0xFC16, "M", "جم"),
+ (0xFC17, "M", "حج"),
+ (0xFC18, "M", "حم"),
+ (0xFC19, "M", "خج"),
+ (0xFC1A, "M", "خح"),
+ (0xFC1B, "M", "خم"),
+ (0xFC1C, "M", "سج"),
+ (0xFC1D, "M", "سح"),
+ (0xFC1E, "M", "سخ"),
+ (0xFC1F, "M", "سم"),
+ (0xFC20, "M", "صح"),
+ (0xFC21, "M", "صم"),
+ (0xFC22, "M", "ضج"),
+ (0xFC23, "M", "ضح"),
+ (0xFC24, "M", "ضخ"),
+ (0xFC25, "M", "ضم"),
+ (0xFC26, "M", "طح"),
+ (0xFC27, "M", "طم"),
+ (0xFC28, "M", "ظم"),
+ (0xFC29, "M", "عج"),
+ (0xFC2A, "M", "عم"),
+ (0xFC2B, "M", "غج"),
+ (0xFC2C, "M", "غم"),
+ (0xFC2D, "M", "فج"),
+ (0xFC2E, "M", "فح"),
+ (0xFC2F, "M", "فخ"),
+ (0xFC30, "M", "فم"),
+ (0xFC31, "M", "فى"),
+ (0xFC32, "M", "في"),
+ (0xFC33, "M", "قح"),
+ (0xFC34, "M", "قم"),
+ (0xFC35, "M", "قى"),
+ (0xFC36, "M", "قي"),
+ (0xFC37, "M", "كا"),
+ (0xFC38, "M", "كج"),
+ (0xFC39, "M", "كح"),
+ (0xFC3A, "M", "كخ"),
+ (0xFC3B, "M", "كل"),
+ (0xFC3C, "M", "كم"),
+ (0xFC3D, "M", "كى"),
+ (0xFC3E, "M", "كي"),
+ (0xFC3F, "M", "لج"),
+ (0xFC40, "M", "لح"),
+ (0xFC41, "M", "لخ"),
+ (0xFC42, "M", "لم"),
+ (0xFC43, "M", "لى"),
+ (0xFC44, "M", "لي"),
+ (0xFC45, "M", "مج"),
+ (0xFC46, "M", "مح"),
+ (0xFC47, "M", "مخ"),
+ (0xFC48, "M", "مم"),
+ (0xFC49, "M", "مى"),
+ (0xFC4A, "M", "مي"),
+ (0xFC4B, "M", "نج"),
+ (0xFC4C, "M", "نح"),
+ (0xFC4D, "M", "نخ"),
+ (0xFC4E, "M", "نم"),
+ (0xFC4F, "M", "نى"),
+ (0xFC50, "M", "ني"),
+ (0xFC51, "M", "هج"),
+ (0xFC52, "M", "هم"),
+ (0xFC53, "M", "هى"),
+ (0xFC54, "M", "هي"),
+ (0xFC55, "M", "يج"),
+ (0xFC56, "M", "يح"),
+ (0xFC57, "M", "يخ"),
+ (0xFC58, "M", "يم"),
+ (0xFC59, "M", "يى"),
+ (0xFC5A, "M", "يي"),
+ ]
+
+
+def _seg_46() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0xFC5B, "M", "ذٰ"),
+ (0xFC5C, "M", "رٰ"),
+ (0xFC5D, "M", "ىٰ"),
+ (0xFC5E, "3", " ٌّ"),
+ (0xFC5F, "3", " ٍّ"),
+ (0xFC60, "3", " َّ"),
+ (0xFC61, "3", " ُّ"),
+ (0xFC62, "3", " ِّ"),
+ (0xFC63, "3", " ّٰ"),
+ (0xFC64, "M", "ئر"),
+ (0xFC65, "M", "ئز"),
+ (0xFC66, "M", "ئم"),
+ (0xFC67, "M", "ئن"),
+ (0xFC68, "M", "ئى"),
+ (0xFC69, "M", "ئي"),
+ (0xFC6A, "M", "بر"),
+ (0xFC6B, "M", "بز"),
+ (0xFC6C, "M", "بم"),
+ (0xFC6D, "M", "بن"),
+ (0xFC6E, "M", "بى"),
+ (0xFC6F, "M", "بي"),
+ (0xFC70, "M", "تر"),
+ (0xFC71, "M", "تز"),
+ (0xFC72, "M", "تم"),
+ (0xFC73, "M", "تن"),
+ (0xFC74, "M", "تى"),
+ (0xFC75, "M", "تي"),
+ (0xFC76, "M", "ثر"),
+ (0xFC77, "M", "ثز"),
+ (0xFC78, "M", "ثم"),
+ (0xFC79, "M", "ثن"),
+ (0xFC7A, "M", "ثى"),
+ (0xFC7B, "M", "ثي"),
+ (0xFC7C, "M", "فى"),
+ (0xFC7D, "M", "في"),
+ (0xFC7E, "M", "قى"),
+ (0xFC7F, "M", "قي"),
+ (0xFC80, "M", "كا"),
+ (0xFC81, "M", "كل"),
+ (0xFC82, "M", "كم"),
+ (0xFC83, "M", "كى"),
+ (0xFC84, "M", "كي"),
+ (0xFC85, "M", "لم"),
+ (0xFC86, "M", "لى"),
+ (0xFC87, "M", "لي"),
+ (0xFC88, "M", "ما"),
+ (0xFC89, "M", "مم"),
+ (0xFC8A, "M", "نر"),
+ (0xFC8B, "M", "نز"),
+ (0xFC8C, "M", "نم"),
+ (0xFC8D, "M", "نن"),
+ (0xFC8E, "M", "نى"),
+ (0xFC8F, "M", "ني"),
+ (0xFC90, "M", "ىٰ"),
+ (0xFC91, "M", "ير"),
+ (0xFC92, "M", "يز"),
+ (0xFC93, "M", "يم"),
+ (0xFC94, "M", "ين"),
+ (0xFC95, "M", "يى"),
+ (0xFC96, "M", "يي"),
+ (0xFC97, "M", "ئج"),
+ (0xFC98, "M", "ئح"),
+ (0xFC99, "M", "ئخ"),
+ (0xFC9A, "M", "ئم"),
+ (0xFC9B, "M", "ئه"),
+ (0xFC9C, "M", "بج"),
+ (0xFC9D, "M", "بح"),
+ (0xFC9E, "M", "بخ"),
+ (0xFC9F, "M", "بم"),
+ (0xFCA0, "M", "به"),
+ (0xFCA1, "M", "تج"),
+ (0xFCA2, "M", "تح"),
+ (0xFCA3, "M", "تخ"),
+ (0xFCA4, "M", "تم"),
+ (0xFCA5, "M", "ته"),
+ (0xFCA6, "M", "ثم"),
+ (0xFCA7, "M", "جح"),
+ (0xFCA8, "M", "جم"),
+ (0xFCA9, "M", "حج"),
+ (0xFCAA, "M", "حم"),
+ (0xFCAB, "M", "خج"),
+ (0xFCAC, "M", "خم"),
+ (0xFCAD, "M", "سج"),
+ (0xFCAE, "M", "سح"),
+ (0xFCAF, "M", "سخ"),
+ (0xFCB0, "M", "سم"),
+ (0xFCB1, "M", "صح"),
+ (0xFCB2, "M", "صخ"),
+ (0xFCB3, "M", "صم"),
+ (0xFCB4, "M", "ضج"),
+ (0xFCB5, "M", "ضح"),
+ (0xFCB6, "M", "ضخ"),
+ (0xFCB7, "M", "ضم"),
+ (0xFCB8, "M", "طح"),
+ (0xFCB9, "M", "ظم"),
+ (0xFCBA, "M", "عج"),
+ (0xFCBB, "M", "عم"),
+ (0xFCBC, "M", "غج"),
+ (0xFCBD, "M", "غم"),
+ (0xFCBE, "M", "فج"),
+ ]
+
+
+def _seg_47() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0xFCBF, "M", "فح"),
+ (0xFCC0, "M", "فخ"),
+ (0xFCC1, "M", "فم"),
+ (0xFCC2, "M", "قح"),
+ (0xFCC3, "M", "قم"),
+ (0xFCC4, "M", "كج"),
+ (0xFCC5, "M", "كح"),
+ (0xFCC6, "M", "كخ"),
+ (0xFCC7, "M", "كل"),
+ (0xFCC8, "M", "كم"),
+ (0xFCC9, "M", "لج"),
+ (0xFCCA, "M", "لح"),
+ (0xFCCB, "M", "لخ"),
+ (0xFCCC, "M", "لم"),
+ (0xFCCD, "M", "له"),
+ (0xFCCE, "M", "مج"),
+ (0xFCCF, "M", "مح"),
+ (0xFCD0, "M", "مخ"),
+ (0xFCD1, "M", "مم"),
+ (0xFCD2, "M", "نج"),
+ (0xFCD3, "M", "نح"),
+ (0xFCD4, "M", "نخ"),
+ (0xFCD5, "M", "نم"),
+ (0xFCD6, "M", "نه"),
+ (0xFCD7, "M", "هج"),
+ (0xFCD8, "M", "هم"),
+ (0xFCD9, "M", "هٰ"),
+ (0xFCDA, "M", "يج"),
+ (0xFCDB, "M", "يح"),
+ (0xFCDC, "M", "يخ"),
+ (0xFCDD, "M", "يم"),
+ (0xFCDE, "M", "يه"),
+ (0xFCDF, "M", "ئم"),
+ (0xFCE0, "M", "ئه"),
+ (0xFCE1, "M", "بم"),
+ (0xFCE2, "M", "به"),
+ (0xFCE3, "M", "تم"),
+ (0xFCE4, "M", "ته"),
+ (0xFCE5, "M", "ثم"),
+ (0xFCE6, "M", "ثه"),
+ (0xFCE7, "M", "سم"),
+ (0xFCE8, "M", "سه"),
+ (0xFCE9, "M", "شم"),
+ (0xFCEA, "M", "شه"),
+ (0xFCEB, "M", "كل"),
+ (0xFCEC, "M", "كم"),
+ (0xFCED, "M", "لم"),
+ (0xFCEE, "M", "نم"),
+ (0xFCEF, "M", "نه"),
+ (0xFCF0, "M", "يم"),
+ (0xFCF1, "M", "يه"),
+ (0xFCF2, "M", "ـَّ"),
+ (0xFCF3, "M", "ـُّ"),
+ (0xFCF4, "M", "ـِّ"),
+ (0xFCF5, "M", "طى"),
+ (0xFCF6, "M", "طي"),
+ (0xFCF7, "M", "عى"),
+ (0xFCF8, "M", "عي"),
+ (0xFCF9, "M", "غى"),
+ (0xFCFA, "M", "غي"),
+ (0xFCFB, "M", "سى"),
+ (0xFCFC, "M", "سي"),
+ (0xFCFD, "M", "شى"),
+ (0xFCFE, "M", "شي"),
+ (0xFCFF, "M", "حى"),
+ (0xFD00, "M", "حي"),
+ (0xFD01, "M", "جى"),
+ (0xFD02, "M", "جي"),
+ (0xFD03, "M", "خى"),
+ (0xFD04, "M", "خي"),
+ (0xFD05, "M", "صى"),
+ (0xFD06, "M", "صي"),
+ (0xFD07, "M", "ضى"),
+ (0xFD08, "M", "ضي"),
+ (0xFD09, "M", "شج"),
+ (0xFD0A, "M", "شح"),
+ (0xFD0B, "M", "شخ"),
+ (0xFD0C, "M", "شم"),
+ (0xFD0D, "M", "شر"),
+ (0xFD0E, "M", "سر"),
+ (0xFD0F, "M", "صر"),
+ (0xFD10, "M", "ضر"),
+ (0xFD11, "M", "طى"),
+ (0xFD12, "M", "طي"),
+ (0xFD13, "M", "عى"),
+ (0xFD14, "M", "عي"),
+ (0xFD15, "M", "غى"),
+ (0xFD16, "M", "غي"),
+ (0xFD17, "M", "سى"),
+ (0xFD18, "M", "سي"),
+ (0xFD19, "M", "شى"),
+ (0xFD1A, "M", "شي"),
+ (0xFD1B, "M", "حى"),
+ (0xFD1C, "M", "حي"),
+ (0xFD1D, "M", "جى"),
+ (0xFD1E, "M", "جي"),
+ (0xFD1F, "M", "خى"),
+ (0xFD20, "M", "خي"),
+ (0xFD21, "M", "صى"),
+ (0xFD22, "M", "صي"),
+ ]
+
+
+def _seg_48() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0xFD23, "M", "ضى"),
+ (0xFD24, "M", "ضي"),
+ (0xFD25, "M", "شج"),
+ (0xFD26, "M", "شح"),
+ (0xFD27, "M", "شخ"),
+ (0xFD28, "M", "شم"),
+ (0xFD29, "M", "شر"),
+ (0xFD2A, "M", "سر"),
+ (0xFD2B, "M", "صر"),
+ (0xFD2C, "M", "ضر"),
+ (0xFD2D, "M", "شج"),
+ (0xFD2E, "M", "شح"),
+ (0xFD2F, "M", "شخ"),
+ (0xFD30, "M", "شم"),
+ (0xFD31, "M", "سه"),
+ (0xFD32, "M", "شه"),
+ (0xFD33, "M", "طم"),
+ (0xFD34, "M", "سج"),
+ (0xFD35, "M", "سح"),
+ (0xFD36, "M", "سخ"),
+ (0xFD37, "M", "شج"),
+ (0xFD38, "M", "شح"),
+ (0xFD39, "M", "شخ"),
+ (0xFD3A, "M", "طم"),
+ (0xFD3B, "M", "ظم"),
+ (0xFD3C, "M", "اً"),
+ (0xFD3E, "V"),
+ (0xFD50, "M", "تجم"),
+ (0xFD51, "M", "تحج"),
+ (0xFD53, "M", "تحم"),
+ (0xFD54, "M", "تخم"),
+ (0xFD55, "M", "تمج"),
+ (0xFD56, "M", "تمح"),
+ (0xFD57, "M", "تمخ"),
+ (0xFD58, "M", "جمح"),
+ (0xFD5A, "M", "حمي"),
+ (0xFD5B, "M", "حمى"),
+ (0xFD5C, "M", "سحج"),
+ (0xFD5D, "M", "سجح"),
+ (0xFD5E, "M", "سجى"),
+ (0xFD5F, "M", "سمح"),
+ (0xFD61, "M", "سمج"),
+ (0xFD62, "M", "سمم"),
+ (0xFD64, "M", "صحح"),
+ (0xFD66, "M", "صمم"),
+ (0xFD67, "M", "شحم"),
+ (0xFD69, "M", "شجي"),
+ (0xFD6A, "M", "شمخ"),
+ (0xFD6C, "M", "شمم"),
+ (0xFD6E, "M", "ضحى"),
+ (0xFD6F, "M", "ضخم"),
+ (0xFD71, "M", "طمح"),
+ (0xFD73, "M", "طمم"),
+ (0xFD74, "M", "طمي"),
+ (0xFD75, "M", "عجم"),
+ (0xFD76, "M", "عمم"),
+ (0xFD78, "M", "عمى"),
+ (0xFD79, "M", "غمم"),
+ (0xFD7A, "M", "غمي"),
+ (0xFD7B, "M", "غمى"),
+ (0xFD7C, "M", "فخم"),
+ (0xFD7E, "M", "قمح"),
+ (0xFD7F, "M", "قمم"),
+ (0xFD80, "M", "لحم"),
+ (0xFD81, "M", "لحي"),
+ (0xFD82, "M", "لحى"),
+ (0xFD83, "M", "لجج"),
+ (0xFD85, "M", "لخم"),
+ (0xFD87, "M", "لمح"),
+ (0xFD89, "M", "محج"),
+ (0xFD8A, "M", "محم"),
+ (0xFD8B, "M", "محي"),
+ (0xFD8C, "M", "مجح"),
+ (0xFD8D, "M", "مجم"),
+ (0xFD8E, "M", "مخج"),
+ (0xFD8F, "M", "مخم"),
+ (0xFD90, "X"),
+ (0xFD92, "M", "مجخ"),
+ (0xFD93, "M", "همج"),
+ (0xFD94, "M", "همم"),
+ (0xFD95, "M", "نحم"),
+ (0xFD96, "M", "نحى"),
+ (0xFD97, "M", "نجم"),
+ (0xFD99, "M", "نجى"),
+ (0xFD9A, "M", "نمي"),
+ (0xFD9B, "M", "نمى"),
+ (0xFD9C, "M", "يمم"),
+ (0xFD9E, "M", "بخي"),
+ (0xFD9F, "M", "تجي"),
+ (0xFDA0, "M", "تجى"),
+ (0xFDA1, "M", "تخي"),
+ (0xFDA2, "M", "تخى"),
+ (0xFDA3, "M", "تمي"),
+ (0xFDA4, "M", "تمى"),
+ (0xFDA5, "M", "جمي"),
+ (0xFDA6, "M", "جحى"),
+ (0xFDA7, "M", "جمى"),
+ (0xFDA8, "M", "سخى"),
+ (0xFDA9, "M", "صحي"),
+ (0xFDAA, "M", "شحي"),
+ ]
+
+
+def _seg_49() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0xFDAB, "M", "ضحي"),
+ (0xFDAC, "M", "لجي"),
+ (0xFDAD, "M", "لمي"),
+ (0xFDAE, "M", "يحي"),
+ (0xFDAF, "M", "يجي"),
+ (0xFDB0, "M", "يمي"),
+ (0xFDB1, "M", "ممي"),
+ (0xFDB2, "M", "قمي"),
+ (0xFDB3, "M", "نحي"),
+ (0xFDB4, "M", "قمح"),
+ (0xFDB5, "M", "لحم"),
+ (0xFDB6, "M", "عمي"),
+ (0xFDB7, "M", "كمي"),
+ (0xFDB8, "M", "نجح"),
+ (0xFDB9, "M", "مخي"),
+ (0xFDBA, "M", "لجم"),
+ (0xFDBB, "M", "كمم"),
+ (0xFDBC, "M", "لجم"),
+ (0xFDBD, "M", "نجح"),
+ (0xFDBE, "M", "جحي"),
+ (0xFDBF, "M", "حجي"),
+ (0xFDC0, "M", "مجي"),
+ (0xFDC1, "M", "فمي"),
+ (0xFDC2, "M", "بحي"),
+ (0xFDC3, "M", "كمم"),
+ (0xFDC4, "M", "عجم"),
+ (0xFDC5, "M", "صمم"),
+ (0xFDC6, "M", "سخي"),
+ (0xFDC7, "M", "نجي"),
+ (0xFDC8, "X"),
+ (0xFDCF, "V"),
+ (0xFDD0, "X"),
+ (0xFDF0, "M", "صلے"),
+ (0xFDF1, "M", "قلے"),
+ (0xFDF2, "M", "الله"),
+ (0xFDF3, "M", "اكبر"),
+ (0xFDF4, "M", "محمد"),
+ (0xFDF5, "M", "صلعم"),
+ (0xFDF6, "M", "رسول"),
+ (0xFDF7, "M", "عليه"),
+ (0xFDF8, "M", "وسلم"),
+ (0xFDF9, "M", "صلى"),
+ (0xFDFA, "3", "صلى الله عليه وسلم"),
+ (0xFDFB, "3", "جل جلاله"),
+ (0xFDFC, "M", "ریال"),
+ (0xFDFD, "V"),
+ (0xFE00, "I"),
+ (0xFE10, "3", ","),
+ (0xFE11, "M", "、"),
+ (0xFE12, "X"),
+ (0xFE13, "3", ":"),
+ (0xFE14, "3", ";"),
+ (0xFE15, "3", "!"),
+ (0xFE16, "3", "?"),
+ (0xFE17, "M", "〖"),
+ (0xFE18, "M", "〗"),
+ (0xFE19, "X"),
+ (0xFE20, "V"),
+ (0xFE30, "X"),
+ (0xFE31, "M", "—"),
+ (0xFE32, "M", "–"),
+ (0xFE33, "3", "_"),
+ (0xFE35, "3", "("),
+ (0xFE36, "3", ")"),
+ (0xFE37, "3", "{"),
+ (0xFE38, "3", "}"),
+ (0xFE39, "M", "〔"),
+ (0xFE3A, "M", "〕"),
+ (0xFE3B, "M", "【"),
+ (0xFE3C, "M", "】"),
+ (0xFE3D, "M", "《"),
+ (0xFE3E, "M", "》"),
+ (0xFE3F, "M", "〈"),
+ (0xFE40, "M", "〉"),
+ (0xFE41, "M", "「"),
+ (0xFE42, "M", "」"),
+ (0xFE43, "M", "『"),
+ (0xFE44, "M", "』"),
+ (0xFE45, "V"),
+ (0xFE47, "3", "["),
+ (0xFE48, "3", "]"),
+ (0xFE49, "3", " ̅"),
+ (0xFE4D, "3", "_"),
+ (0xFE50, "3", ","),
+ (0xFE51, "M", "、"),
+ (0xFE52, "X"),
+ (0xFE54, "3", ";"),
+ (0xFE55, "3", ":"),
+ (0xFE56, "3", "?"),
+ (0xFE57, "3", "!"),
+ (0xFE58, "M", "—"),
+ (0xFE59, "3", "("),
+ (0xFE5A, "3", ")"),
+ (0xFE5B, "3", "{"),
+ (0xFE5C, "3", "}"),
+ (0xFE5D, "M", "〔"),
+ (0xFE5E, "M", "〕"),
+ (0xFE5F, "3", "#"),
+ (0xFE60, "3", "&"),
+ (0xFE61, "3", "*"),
+ ]
+
+
+def _seg_50() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0xFE62, "3", "+"),
+ (0xFE63, "M", "-"),
+ (0xFE64, "3", "<"),
+ (0xFE65, "3", ">"),
+ (0xFE66, "3", "="),
+ (0xFE67, "X"),
+ (0xFE68, "3", "\\"),
+ (0xFE69, "3", "$"),
+ (0xFE6A, "3", "%"),
+ (0xFE6B, "3", "@"),
+ (0xFE6C, "X"),
+ (0xFE70, "3", " ً"),
+ (0xFE71, "M", "ـً"),
+ (0xFE72, "3", " ٌ"),
+ (0xFE73, "V"),
+ (0xFE74, "3", " ٍ"),
+ (0xFE75, "X"),
+ (0xFE76, "3", " َ"),
+ (0xFE77, "M", "ـَ"),
+ (0xFE78, "3", " ُ"),
+ (0xFE79, "M", "ـُ"),
+ (0xFE7A, "3", " ِ"),
+ (0xFE7B, "M", "ـِ"),
+ (0xFE7C, "3", " ّ"),
+ (0xFE7D, "M", "ـّ"),
+ (0xFE7E, "3", " ْ"),
+ (0xFE7F, "M", "ـْ"),
+ (0xFE80, "M", "ء"),
+ (0xFE81, "M", "آ"),
+ (0xFE83, "M", "أ"),
+ (0xFE85, "M", "ؤ"),
+ (0xFE87, "M", "إ"),
+ (0xFE89, "M", "ئ"),
+ (0xFE8D, "M", "ا"),
+ (0xFE8F, "M", "ب"),
+ (0xFE93, "M", "ة"),
+ (0xFE95, "M", "ت"),
+ (0xFE99, "M", "ث"),
+ (0xFE9D, "M", "ج"),
+ (0xFEA1, "M", "ح"),
+ (0xFEA5, "M", "خ"),
+ (0xFEA9, "M", "د"),
+ (0xFEAB, "M", "ذ"),
+ (0xFEAD, "M", "ر"),
+ (0xFEAF, "M", "ز"),
+ (0xFEB1, "M", "س"),
+ (0xFEB5, "M", "ش"),
+ (0xFEB9, "M", "ص"),
+ (0xFEBD, "M", "ض"),
+ (0xFEC1, "M", "ط"),
+ (0xFEC5, "M", "ظ"),
+ (0xFEC9, "M", "ع"),
+ (0xFECD, "M", "غ"),
+ (0xFED1, "M", "ف"),
+ (0xFED5, "M", "ق"),
+ (0xFED9, "M", "ك"),
+ (0xFEDD, "M", "ل"),
+ (0xFEE1, "M", "م"),
+ (0xFEE5, "M", "ن"),
+ (0xFEE9, "M", "ه"),
+ (0xFEED, "M", "و"),
+ (0xFEEF, "M", "ى"),
+ (0xFEF1, "M", "ي"),
+ (0xFEF5, "M", "لآ"),
+ (0xFEF7, "M", "لأ"),
+ (0xFEF9, "M", "لإ"),
+ (0xFEFB, "M", "لا"),
+ (0xFEFD, "X"),
+ (0xFEFF, "I"),
+ (0xFF00, "X"),
+ (0xFF01, "3", "!"),
+ (0xFF02, "3", '"'),
+ (0xFF03, "3", "#"),
+ (0xFF04, "3", "$"),
+ (0xFF05, "3", "%"),
+ (0xFF06, "3", "&"),
+ (0xFF07, "3", "'"),
+ (0xFF08, "3", "("),
+ (0xFF09, "3", ")"),
+ (0xFF0A, "3", "*"),
+ (0xFF0B, "3", "+"),
+ (0xFF0C, "3", ","),
+ (0xFF0D, "M", "-"),
+ (0xFF0E, "M", "."),
+ (0xFF0F, "3", "/"),
+ (0xFF10, "M", "0"),
+ (0xFF11, "M", "1"),
+ (0xFF12, "M", "2"),
+ (0xFF13, "M", "3"),
+ (0xFF14, "M", "4"),
+ (0xFF15, "M", "5"),
+ (0xFF16, "M", "6"),
+ (0xFF17, "M", "7"),
+ (0xFF18, "M", "8"),
+ (0xFF19, "M", "9"),
+ (0xFF1A, "3", ":"),
+ (0xFF1B, "3", ";"),
+ (0xFF1C, "3", "<"),
+ (0xFF1D, "3", "="),
+ (0xFF1E, "3", ">"),
+ ]
+
+
+def _seg_51() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0xFF1F, "3", "?"),
+ (0xFF20, "3", "@"),
+ (0xFF21, "M", "a"),
+ (0xFF22, "M", "b"),
+ (0xFF23, "M", "c"),
+ (0xFF24, "M", "d"),
+ (0xFF25, "M", "e"),
+ (0xFF26, "M", "f"),
+ (0xFF27, "M", "g"),
+ (0xFF28, "M", "h"),
+ (0xFF29, "M", "i"),
+ (0xFF2A, "M", "j"),
+ (0xFF2B, "M", "k"),
+ (0xFF2C, "M", "l"),
+ (0xFF2D, "M", "m"),
+ (0xFF2E, "M", "n"),
+ (0xFF2F, "M", "o"),
+ (0xFF30, "M", "p"),
+ (0xFF31, "M", "q"),
+ (0xFF32, "M", "r"),
+ (0xFF33, "M", "s"),
+ (0xFF34, "M", "t"),
+ (0xFF35, "M", "u"),
+ (0xFF36, "M", "v"),
+ (0xFF37, "M", "w"),
+ (0xFF38, "M", "x"),
+ (0xFF39, "M", "y"),
+ (0xFF3A, "M", "z"),
+ (0xFF3B, "3", "["),
+ (0xFF3C, "3", "\\"),
+ (0xFF3D, "3", "]"),
+ (0xFF3E, "3", "^"),
+ (0xFF3F, "3", "_"),
+ (0xFF40, "3", "`"),
+ (0xFF41, "M", "a"),
+ (0xFF42, "M", "b"),
+ (0xFF43, "M", "c"),
+ (0xFF44, "M", "d"),
+ (0xFF45, "M", "e"),
+ (0xFF46, "M", "f"),
+ (0xFF47, "M", "g"),
+ (0xFF48, "M", "h"),
+ (0xFF49, "M", "i"),
+ (0xFF4A, "M", "j"),
+ (0xFF4B, "M", "k"),
+ (0xFF4C, "M", "l"),
+ (0xFF4D, "M", "m"),
+ (0xFF4E, "M", "n"),
+ (0xFF4F, "M", "o"),
+ (0xFF50, "M", "p"),
+ (0xFF51, "M", "q"),
+ (0xFF52, "M", "r"),
+ (0xFF53, "M", "s"),
+ (0xFF54, "M", "t"),
+ (0xFF55, "M", "u"),
+ (0xFF56, "M", "v"),
+ (0xFF57, "M", "w"),
+ (0xFF58, "M", "x"),
+ (0xFF59, "M", "y"),
+ (0xFF5A, "M", "z"),
+ (0xFF5B, "3", "{"),
+ (0xFF5C, "3", "|"),
+ (0xFF5D, "3", "}"),
+ (0xFF5E, "3", "~"),
+ (0xFF5F, "M", "⦅"),
+ (0xFF60, "M", "⦆"),
+ (0xFF61, "M", "."),
+ (0xFF62, "M", "「"),
+ (0xFF63, "M", "」"),
+ (0xFF64, "M", "、"),
+ (0xFF65, "M", "・"),
+ (0xFF66, "M", "ヲ"),
+ (0xFF67, "M", "ァ"),
+ (0xFF68, "M", "ィ"),
+ (0xFF69, "M", "ゥ"),
+ (0xFF6A, "M", "ェ"),
+ (0xFF6B, "M", "ォ"),
+ (0xFF6C, "M", "ャ"),
+ (0xFF6D, "M", "ュ"),
+ (0xFF6E, "M", "ョ"),
+ (0xFF6F, "M", "ッ"),
+ (0xFF70, "M", "ー"),
+ (0xFF71, "M", "ア"),
+ (0xFF72, "M", "イ"),
+ (0xFF73, "M", "ウ"),
+ (0xFF74, "M", "エ"),
+ (0xFF75, "M", "オ"),
+ (0xFF76, "M", "カ"),
+ (0xFF77, "M", "キ"),
+ (0xFF78, "M", "ク"),
+ (0xFF79, "M", "ケ"),
+ (0xFF7A, "M", "コ"),
+ (0xFF7B, "M", "サ"),
+ (0xFF7C, "M", "シ"),
+ (0xFF7D, "M", "ス"),
+ (0xFF7E, "M", "セ"),
+ (0xFF7F, "M", "ソ"),
+ (0xFF80, "M", "タ"),
+ (0xFF81, "M", "チ"),
+ (0xFF82, "M", "ツ"),
+ ]
+
+
+def _seg_52() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0xFF83, "M", "テ"),
+ (0xFF84, "M", "ト"),
+ (0xFF85, "M", "ナ"),
+ (0xFF86, "M", "ニ"),
+ (0xFF87, "M", "ヌ"),
+ (0xFF88, "M", "ネ"),
+ (0xFF89, "M", "ノ"),
+ (0xFF8A, "M", "ハ"),
+ (0xFF8B, "M", "ヒ"),
+ (0xFF8C, "M", "フ"),
+ (0xFF8D, "M", "ヘ"),
+ (0xFF8E, "M", "ホ"),
+ (0xFF8F, "M", "マ"),
+ (0xFF90, "M", "ミ"),
+ (0xFF91, "M", "ム"),
+ (0xFF92, "M", "メ"),
+ (0xFF93, "M", "モ"),
+ (0xFF94, "M", "ヤ"),
+ (0xFF95, "M", "ユ"),
+ (0xFF96, "M", "ヨ"),
+ (0xFF97, "M", "ラ"),
+ (0xFF98, "M", "リ"),
+ (0xFF99, "M", "ル"),
+ (0xFF9A, "M", "レ"),
+ (0xFF9B, "M", "ロ"),
+ (0xFF9C, "M", "ワ"),
+ (0xFF9D, "M", "ン"),
+ (0xFF9E, "M", "゙"),
+ (0xFF9F, "M", "゚"),
+ (0xFFA0, "X"),
+ (0xFFA1, "M", "ᄀ"),
+ (0xFFA2, "M", "ᄁ"),
+ (0xFFA3, "M", "ᆪ"),
+ (0xFFA4, "M", "ᄂ"),
+ (0xFFA5, "M", "ᆬ"),
+ (0xFFA6, "M", "ᆭ"),
+ (0xFFA7, "M", "ᄃ"),
+ (0xFFA8, "M", "ᄄ"),
+ (0xFFA9, "M", "ᄅ"),
+ (0xFFAA, "M", "ᆰ"),
+ (0xFFAB, "M", "ᆱ"),
+ (0xFFAC, "M", "ᆲ"),
+ (0xFFAD, "M", "ᆳ"),
+ (0xFFAE, "M", "ᆴ"),
+ (0xFFAF, "M", "ᆵ"),
+ (0xFFB0, "M", "ᄚ"),
+ (0xFFB1, "M", "ᄆ"),
+ (0xFFB2, "M", "ᄇ"),
+ (0xFFB3, "M", "ᄈ"),
+ (0xFFB4, "M", "ᄡ"),
+ (0xFFB5, "M", "ᄉ"),
+ (0xFFB6, "M", "ᄊ"),
+ (0xFFB7, "M", "ᄋ"),
+ (0xFFB8, "M", "ᄌ"),
+ (0xFFB9, "M", "ᄍ"),
+ (0xFFBA, "M", "ᄎ"),
+ (0xFFBB, "M", "ᄏ"),
+ (0xFFBC, "M", "ᄐ"),
+ (0xFFBD, "M", "ᄑ"),
+ (0xFFBE, "M", "ᄒ"),
+ (0xFFBF, "X"),
+ (0xFFC2, "M", "ᅡ"),
+ (0xFFC3, "M", "ᅢ"),
+ (0xFFC4, "M", "ᅣ"),
+ (0xFFC5, "M", "ᅤ"),
+ (0xFFC6, "M", "ᅥ"),
+ (0xFFC7, "M", "ᅦ"),
+ (0xFFC8, "X"),
+ (0xFFCA, "M", "ᅧ"),
+ (0xFFCB, "M", "ᅨ"),
+ (0xFFCC, "M", "ᅩ"),
+ (0xFFCD, "M", "ᅪ"),
+ (0xFFCE, "M", "ᅫ"),
+ (0xFFCF, "M", "ᅬ"),
+ (0xFFD0, "X"),
+ (0xFFD2, "M", "ᅭ"),
+ (0xFFD3, "M", "ᅮ"),
+ (0xFFD4, "M", "ᅯ"),
+ (0xFFD5, "M", "ᅰ"),
+ (0xFFD6, "M", "ᅱ"),
+ (0xFFD7, "M", "ᅲ"),
+ (0xFFD8, "X"),
+ (0xFFDA, "M", "ᅳ"),
+ (0xFFDB, "M", "ᅴ"),
+ (0xFFDC, "M", "ᅵ"),
+ (0xFFDD, "X"),
+ (0xFFE0, "M", "¢"),
+ (0xFFE1, "M", "£"),
+ (0xFFE2, "M", "¬"),
+ (0xFFE3, "3", " ̄"),
+ (0xFFE4, "M", "¦"),
+ (0xFFE5, "M", "¥"),
+ (0xFFE6, "M", "₩"),
+ (0xFFE7, "X"),
+ (0xFFE8, "M", "│"),
+ (0xFFE9, "M", "←"),
+ (0xFFEA, "M", "↑"),
+ (0xFFEB, "M", "→"),
+ (0xFFEC, "M", "↓"),
+ (0xFFED, "M", "■"),
+ ]
+
+
+def _seg_53() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0xFFEE, "M", "○"),
+ (0xFFEF, "X"),
+ (0x10000, "V"),
+ (0x1000C, "X"),
+ (0x1000D, "V"),
+ (0x10027, "X"),
+ (0x10028, "V"),
+ (0x1003B, "X"),
+ (0x1003C, "V"),
+ (0x1003E, "X"),
+ (0x1003F, "V"),
+ (0x1004E, "X"),
+ (0x10050, "V"),
+ (0x1005E, "X"),
+ (0x10080, "V"),
+ (0x100FB, "X"),
+ (0x10100, "V"),
+ (0x10103, "X"),
+ (0x10107, "V"),
+ (0x10134, "X"),
+ (0x10137, "V"),
+ (0x1018F, "X"),
+ (0x10190, "V"),
+ (0x1019D, "X"),
+ (0x101A0, "V"),
+ (0x101A1, "X"),
+ (0x101D0, "V"),
+ (0x101FE, "X"),
+ (0x10280, "V"),
+ (0x1029D, "X"),
+ (0x102A0, "V"),
+ (0x102D1, "X"),
+ (0x102E0, "V"),
+ (0x102FC, "X"),
+ (0x10300, "V"),
+ (0x10324, "X"),
+ (0x1032D, "V"),
+ (0x1034B, "X"),
+ (0x10350, "V"),
+ (0x1037B, "X"),
+ (0x10380, "V"),
+ (0x1039E, "X"),
+ (0x1039F, "V"),
+ (0x103C4, "X"),
+ (0x103C8, "V"),
+ (0x103D6, "X"),
+ (0x10400, "M", "𐐨"),
+ (0x10401, "M", "𐐩"),
+ (0x10402, "M", "𐐪"),
+ (0x10403, "M", "𐐫"),
+ (0x10404, "M", "𐐬"),
+ (0x10405, "M", "𐐭"),
+ (0x10406, "M", "𐐮"),
+ (0x10407, "M", "𐐯"),
+ (0x10408, "M", "𐐰"),
+ (0x10409, "M", "𐐱"),
+ (0x1040A, "M", "𐐲"),
+ (0x1040B, "M", "𐐳"),
+ (0x1040C, "M", "𐐴"),
+ (0x1040D, "M", "𐐵"),
+ (0x1040E, "M", "𐐶"),
+ (0x1040F, "M", "𐐷"),
+ (0x10410, "M", "𐐸"),
+ (0x10411, "M", "𐐹"),
+ (0x10412, "M", "𐐺"),
+ (0x10413, "M", "𐐻"),
+ (0x10414, "M", "𐐼"),
+ (0x10415, "M", "𐐽"),
+ (0x10416, "M", "𐐾"),
+ (0x10417, "M", "𐐿"),
+ (0x10418, "M", "𐑀"),
+ (0x10419, "M", "𐑁"),
+ (0x1041A, "M", "𐑂"),
+ (0x1041B, "M", "𐑃"),
+ (0x1041C, "M", "𐑄"),
+ (0x1041D, "M", "𐑅"),
+ (0x1041E, "M", "𐑆"),
+ (0x1041F, "M", "𐑇"),
+ (0x10420, "M", "𐑈"),
+ (0x10421, "M", "𐑉"),
+ (0x10422, "M", "𐑊"),
+ (0x10423, "M", "𐑋"),
+ (0x10424, "M", "𐑌"),
+ (0x10425, "M", "𐑍"),
+ (0x10426, "M", "𐑎"),
+ (0x10427, "M", "𐑏"),
+ (0x10428, "V"),
+ (0x1049E, "X"),
+ (0x104A0, "V"),
+ (0x104AA, "X"),
+ (0x104B0, "M", "𐓘"),
+ (0x104B1, "M", "𐓙"),
+ (0x104B2, "M", "𐓚"),
+ (0x104B3, "M", "𐓛"),
+ (0x104B4, "M", "𐓜"),
+ (0x104B5, "M", "𐓝"),
+ (0x104B6, "M", "𐓞"),
+ (0x104B7, "M", "𐓟"),
+ (0x104B8, "M", "𐓠"),
+ (0x104B9, "M", "𐓡"),
+ ]
+
+
+def _seg_54() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0x104BA, "M", "𐓢"),
+ (0x104BB, "M", "𐓣"),
+ (0x104BC, "M", "𐓤"),
+ (0x104BD, "M", "𐓥"),
+ (0x104BE, "M", "𐓦"),
+ (0x104BF, "M", "𐓧"),
+ (0x104C0, "M", "𐓨"),
+ (0x104C1, "M", "𐓩"),
+ (0x104C2, "M", "𐓪"),
+ (0x104C3, "M", "𐓫"),
+ (0x104C4, "M", "𐓬"),
+ (0x104C5, "M", "𐓭"),
+ (0x104C6, "M", "𐓮"),
+ (0x104C7, "M", "𐓯"),
+ (0x104C8, "M", "𐓰"),
+ (0x104C9, "M", "𐓱"),
+ (0x104CA, "M", "𐓲"),
+ (0x104CB, "M", "𐓳"),
+ (0x104CC, "M", "𐓴"),
+ (0x104CD, "M", "𐓵"),
+ (0x104CE, "M", "𐓶"),
+ (0x104CF, "M", "𐓷"),
+ (0x104D0, "M", "𐓸"),
+ (0x104D1, "M", "𐓹"),
+ (0x104D2, "M", "𐓺"),
+ (0x104D3, "M", "𐓻"),
+ (0x104D4, "X"),
+ (0x104D8, "V"),
+ (0x104FC, "X"),
+ (0x10500, "V"),
+ (0x10528, "X"),
+ (0x10530, "V"),
+ (0x10564, "X"),
+ (0x1056F, "V"),
+ (0x10570, "M", "𐖗"),
+ (0x10571, "M", "𐖘"),
+ (0x10572, "M", "𐖙"),
+ (0x10573, "M", "𐖚"),
+ (0x10574, "M", "𐖛"),
+ (0x10575, "M", "𐖜"),
+ (0x10576, "M", "𐖝"),
+ (0x10577, "M", "𐖞"),
+ (0x10578, "M", "𐖟"),
+ (0x10579, "M", "𐖠"),
+ (0x1057A, "M", "𐖡"),
+ (0x1057B, "X"),
+ (0x1057C, "M", "𐖣"),
+ (0x1057D, "M", "𐖤"),
+ (0x1057E, "M", "𐖥"),
+ (0x1057F, "M", "𐖦"),
+ (0x10580, "M", "𐖧"),
+ (0x10581, "M", "𐖨"),
+ (0x10582, "M", "𐖩"),
+ (0x10583, "M", "𐖪"),
+ (0x10584, "M", "𐖫"),
+ (0x10585, "M", "𐖬"),
+ (0x10586, "M", "𐖭"),
+ (0x10587, "M", "𐖮"),
+ (0x10588, "M", "𐖯"),
+ (0x10589, "M", "𐖰"),
+ (0x1058A, "M", "𐖱"),
+ (0x1058B, "X"),
+ (0x1058C, "M", "𐖳"),
+ (0x1058D, "M", "𐖴"),
+ (0x1058E, "M", "𐖵"),
+ (0x1058F, "M", "𐖶"),
+ (0x10590, "M", "𐖷"),
+ (0x10591, "M", "𐖸"),
+ (0x10592, "M", "𐖹"),
+ (0x10593, "X"),
+ (0x10594, "M", "𐖻"),
+ (0x10595, "M", "𐖼"),
+ (0x10596, "X"),
+ (0x10597, "V"),
+ (0x105A2, "X"),
+ (0x105A3, "V"),
+ (0x105B2, "X"),
+ (0x105B3, "V"),
+ (0x105BA, "X"),
+ (0x105BB, "V"),
+ (0x105BD, "X"),
+ (0x10600, "V"),
+ (0x10737, "X"),
+ (0x10740, "V"),
+ (0x10756, "X"),
+ (0x10760, "V"),
+ (0x10768, "X"),
+ (0x10780, "V"),
+ (0x10781, "M", "ː"),
+ (0x10782, "M", "ˑ"),
+ (0x10783, "M", "æ"),
+ (0x10784, "M", "ʙ"),
+ (0x10785, "M", "ɓ"),
+ (0x10786, "X"),
+ (0x10787, "M", "ʣ"),
+ (0x10788, "M", "ꭦ"),
+ (0x10789, "M", "ʥ"),
+ (0x1078A, "M", "ʤ"),
+ (0x1078B, "M", "ɖ"),
+ (0x1078C, "M", "ɗ"),
+ ]
+
+
+def _seg_55() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0x1078D, "M", "ᶑ"),
+ (0x1078E, "M", "ɘ"),
+ (0x1078F, "M", "ɞ"),
+ (0x10790, "M", "ʩ"),
+ (0x10791, "M", "ɤ"),
+ (0x10792, "M", "ɢ"),
+ (0x10793, "M", "ɠ"),
+ (0x10794, "M", "ʛ"),
+ (0x10795, "M", "ħ"),
+ (0x10796, "M", "ʜ"),
+ (0x10797, "M", "ɧ"),
+ (0x10798, "M", "ʄ"),
+ (0x10799, "M", "ʪ"),
+ (0x1079A, "M", "ʫ"),
+ (0x1079B, "M", "ɬ"),
+ (0x1079C, "M", "𝼄"),
+ (0x1079D, "M", "ꞎ"),
+ (0x1079E, "M", "ɮ"),
+ (0x1079F, "M", "𝼅"),
+ (0x107A0, "M", "ʎ"),
+ (0x107A1, "M", "𝼆"),
+ (0x107A2, "M", "ø"),
+ (0x107A3, "M", "ɶ"),
+ (0x107A4, "M", "ɷ"),
+ (0x107A5, "M", "q"),
+ (0x107A6, "M", "ɺ"),
+ (0x107A7, "M", "𝼈"),
+ (0x107A8, "M", "ɽ"),
+ (0x107A9, "M", "ɾ"),
+ (0x107AA, "M", "ʀ"),
+ (0x107AB, "M", "ʨ"),
+ (0x107AC, "M", "ʦ"),
+ (0x107AD, "M", "ꭧ"),
+ (0x107AE, "M", "ʧ"),
+ (0x107AF, "M", "ʈ"),
+ (0x107B0, "M", "ⱱ"),
+ (0x107B1, "X"),
+ (0x107B2, "M", "ʏ"),
+ (0x107B3, "M", "ʡ"),
+ (0x107B4, "M", "ʢ"),
+ (0x107B5, "M", "ʘ"),
+ (0x107B6, "M", "ǀ"),
+ (0x107B7, "M", "ǁ"),
+ (0x107B8, "M", "ǂ"),
+ (0x107B9, "M", "𝼊"),
+ (0x107BA, "M", "𝼞"),
+ (0x107BB, "X"),
+ (0x10800, "V"),
+ (0x10806, "X"),
+ (0x10808, "V"),
+ (0x10809, "X"),
+ (0x1080A, "V"),
+ (0x10836, "X"),
+ (0x10837, "V"),
+ (0x10839, "X"),
+ (0x1083C, "V"),
+ (0x1083D, "X"),
+ (0x1083F, "V"),
+ (0x10856, "X"),
+ (0x10857, "V"),
+ (0x1089F, "X"),
+ (0x108A7, "V"),
+ (0x108B0, "X"),
+ (0x108E0, "V"),
+ (0x108F3, "X"),
+ (0x108F4, "V"),
+ (0x108F6, "X"),
+ (0x108FB, "V"),
+ (0x1091C, "X"),
+ (0x1091F, "V"),
+ (0x1093A, "X"),
+ (0x1093F, "V"),
+ (0x10940, "X"),
+ (0x10980, "V"),
+ (0x109B8, "X"),
+ (0x109BC, "V"),
+ (0x109D0, "X"),
+ (0x109D2, "V"),
+ (0x10A04, "X"),
+ (0x10A05, "V"),
+ (0x10A07, "X"),
+ (0x10A0C, "V"),
+ (0x10A14, "X"),
+ (0x10A15, "V"),
+ (0x10A18, "X"),
+ (0x10A19, "V"),
+ (0x10A36, "X"),
+ (0x10A38, "V"),
+ (0x10A3B, "X"),
+ (0x10A3F, "V"),
+ (0x10A49, "X"),
+ (0x10A50, "V"),
+ (0x10A59, "X"),
+ (0x10A60, "V"),
+ (0x10AA0, "X"),
+ (0x10AC0, "V"),
+ (0x10AE7, "X"),
+ (0x10AEB, "V"),
+ (0x10AF7, "X"),
+ (0x10B00, "V"),
+ ]
+
+
+def _seg_56() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0x10B36, "X"),
+ (0x10B39, "V"),
+ (0x10B56, "X"),
+ (0x10B58, "V"),
+ (0x10B73, "X"),
+ (0x10B78, "V"),
+ (0x10B92, "X"),
+ (0x10B99, "V"),
+ (0x10B9D, "X"),
+ (0x10BA9, "V"),
+ (0x10BB0, "X"),
+ (0x10C00, "V"),
+ (0x10C49, "X"),
+ (0x10C80, "M", "𐳀"),
+ (0x10C81, "M", "𐳁"),
+ (0x10C82, "M", "𐳂"),
+ (0x10C83, "M", "𐳃"),
+ (0x10C84, "M", "𐳄"),
+ (0x10C85, "M", "𐳅"),
+ (0x10C86, "M", "𐳆"),
+ (0x10C87, "M", "𐳇"),
+ (0x10C88, "M", "𐳈"),
+ (0x10C89, "M", "𐳉"),
+ (0x10C8A, "M", "𐳊"),
+ (0x10C8B, "M", "𐳋"),
+ (0x10C8C, "M", "𐳌"),
+ (0x10C8D, "M", "𐳍"),
+ (0x10C8E, "M", "𐳎"),
+ (0x10C8F, "M", "𐳏"),
+ (0x10C90, "M", "𐳐"),
+ (0x10C91, "M", "𐳑"),
+ (0x10C92, "M", "𐳒"),
+ (0x10C93, "M", "𐳓"),
+ (0x10C94, "M", "𐳔"),
+ (0x10C95, "M", "𐳕"),
+ (0x10C96, "M", "𐳖"),
+ (0x10C97, "M", "𐳗"),
+ (0x10C98, "M", "𐳘"),
+ (0x10C99, "M", "𐳙"),
+ (0x10C9A, "M", "𐳚"),
+ (0x10C9B, "M", "𐳛"),
+ (0x10C9C, "M", "𐳜"),
+ (0x10C9D, "M", "𐳝"),
+ (0x10C9E, "M", "𐳞"),
+ (0x10C9F, "M", "𐳟"),
+ (0x10CA0, "M", "𐳠"),
+ (0x10CA1, "M", "𐳡"),
+ (0x10CA2, "M", "𐳢"),
+ (0x10CA3, "M", "𐳣"),
+ (0x10CA4, "M", "𐳤"),
+ (0x10CA5, "M", "𐳥"),
+ (0x10CA6, "M", "𐳦"),
+ (0x10CA7, "M", "𐳧"),
+ (0x10CA8, "M", "𐳨"),
+ (0x10CA9, "M", "𐳩"),
+ (0x10CAA, "M", "𐳪"),
+ (0x10CAB, "M", "𐳫"),
+ (0x10CAC, "M", "𐳬"),
+ (0x10CAD, "M", "𐳭"),
+ (0x10CAE, "M", "𐳮"),
+ (0x10CAF, "M", "𐳯"),
+ (0x10CB0, "M", "𐳰"),
+ (0x10CB1, "M", "𐳱"),
+ (0x10CB2, "M", "𐳲"),
+ (0x10CB3, "X"),
+ (0x10CC0, "V"),
+ (0x10CF3, "X"),
+ (0x10CFA, "V"),
+ (0x10D28, "X"),
+ (0x10D30, "V"),
+ (0x10D3A, "X"),
+ (0x10E60, "V"),
+ (0x10E7F, "X"),
+ (0x10E80, "V"),
+ (0x10EAA, "X"),
+ (0x10EAB, "V"),
+ (0x10EAE, "X"),
+ (0x10EB0, "V"),
+ (0x10EB2, "X"),
+ (0x10EFD, "V"),
+ (0x10F28, "X"),
+ (0x10F30, "V"),
+ (0x10F5A, "X"),
+ (0x10F70, "V"),
+ (0x10F8A, "X"),
+ (0x10FB0, "V"),
+ (0x10FCC, "X"),
+ (0x10FE0, "V"),
+ (0x10FF7, "X"),
+ (0x11000, "V"),
+ (0x1104E, "X"),
+ (0x11052, "V"),
+ (0x11076, "X"),
+ (0x1107F, "V"),
+ (0x110BD, "X"),
+ (0x110BE, "V"),
+ (0x110C3, "X"),
+ (0x110D0, "V"),
+ (0x110E9, "X"),
+ (0x110F0, "V"),
+ ]
+
+
+def _seg_57() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0x110FA, "X"),
+ (0x11100, "V"),
+ (0x11135, "X"),
+ (0x11136, "V"),
+ (0x11148, "X"),
+ (0x11150, "V"),
+ (0x11177, "X"),
+ (0x11180, "V"),
+ (0x111E0, "X"),
+ (0x111E1, "V"),
+ (0x111F5, "X"),
+ (0x11200, "V"),
+ (0x11212, "X"),
+ (0x11213, "V"),
+ (0x11242, "X"),
+ (0x11280, "V"),
+ (0x11287, "X"),
+ (0x11288, "V"),
+ (0x11289, "X"),
+ (0x1128A, "V"),
+ (0x1128E, "X"),
+ (0x1128F, "V"),
+ (0x1129E, "X"),
+ (0x1129F, "V"),
+ (0x112AA, "X"),
+ (0x112B0, "V"),
+ (0x112EB, "X"),
+ (0x112F0, "V"),
+ (0x112FA, "X"),
+ (0x11300, "V"),
+ (0x11304, "X"),
+ (0x11305, "V"),
+ (0x1130D, "X"),
+ (0x1130F, "V"),
+ (0x11311, "X"),
+ (0x11313, "V"),
+ (0x11329, "X"),
+ (0x1132A, "V"),
+ (0x11331, "X"),
+ (0x11332, "V"),
+ (0x11334, "X"),
+ (0x11335, "V"),
+ (0x1133A, "X"),
+ (0x1133B, "V"),
+ (0x11345, "X"),
+ (0x11347, "V"),
+ (0x11349, "X"),
+ (0x1134B, "V"),
+ (0x1134E, "X"),
+ (0x11350, "V"),
+ (0x11351, "X"),
+ (0x11357, "V"),
+ (0x11358, "X"),
+ (0x1135D, "V"),
+ (0x11364, "X"),
+ (0x11366, "V"),
+ (0x1136D, "X"),
+ (0x11370, "V"),
+ (0x11375, "X"),
+ (0x11400, "V"),
+ (0x1145C, "X"),
+ (0x1145D, "V"),
+ (0x11462, "X"),
+ (0x11480, "V"),
+ (0x114C8, "X"),
+ (0x114D0, "V"),
+ (0x114DA, "X"),
+ (0x11580, "V"),
+ (0x115B6, "X"),
+ (0x115B8, "V"),
+ (0x115DE, "X"),
+ (0x11600, "V"),
+ (0x11645, "X"),
+ (0x11650, "V"),
+ (0x1165A, "X"),
+ (0x11660, "V"),
+ (0x1166D, "X"),
+ (0x11680, "V"),
+ (0x116BA, "X"),
+ (0x116C0, "V"),
+ (0x116CA, "X"),
+ (0x11700, "V"),
+ (0x1171B, "X"),
+ (0x1171D, "V"),
+ (0x1172C, "X"),
+ (0x11730, "V"),
+ (0x11747, "X"),
+ (0x11800, "V"),
+ (0x1183C, "X"),
+ (0x118A0, "M", "𑣀"),
+ (0x118A1, "M", "𑣁"),
+ (0x118A2, "M", "𑣂"),
+ (0x118A3, "M", "𑣃"),
+ (0x118A4, "M", "𑣄"),
+ (0x118A5, "M", "𑣅"),
+ (0x118A6, "M", "𑣆"),
+ (0x118A7, "M", "𑣇"),
+ (0x118A8, "M", "𑣈"),
+ (0x118A9, "M", "𑣉"),
+ (0x118AA, "M", "𑣊"),
+ ]
+
+
+def _seg_58() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0x118AB, "M", "𑣋"),
+ (0x118AC, "M", "𑣌"),
+ (0x118AD, "M", "𑣍"),
+ (0x118AE, "M", "𑣎"),
+ (0x118AF, "M", "𑣏"),
+ (0x118B0, "M", "𑣐"),
+ (0x118B1, "M", "𑣑"),
+ (0x118B2, "M", "𑣒"),
+ (0x118B3, "M", "𑣓"),
+ (0x118B4, "M", "𑣔"),
+ (0x118B5, "M", "𑣕"),
+ (0x118B6, "M", "𑣖"),
+ (0x118B7, "M", "𑣗"),
+ (0x118B8, "M", "𑣘"),
+ (0x118B9, "M", "𑣙"),
+ (0x118BA, "M", "𑣚"),
+ (0x118BB, "M", "𑣛"),
+ (0x118BC, "M", "𑣜"),
+ (0x118BD, "M", "𑣝"),
+ (0x118BE, "M", "𑣞"),
+ (0x118BF, "M", "𑣟"),
+ (0x118C0, "V"),
+ (0x118F3, "X"),
+ (0x118FF, "V"),
+ (0x11907, "X"),
+ (0x11909, "V"),
+ (0x1190A, "X"),
+ (0x1190C, "V"),
+ (0x11914, "X"),
+ (0x11915, "V"),
+ (0x11917, "X"),
+ (0x11918, "V"),
+ (0x11936, "X"),
+ (0x11937, "V"),
+ (0x11939, "X"),
+ (0x1193B, "V"),
+ (0x11947, "X"),
+ (0x11950, "V"),
+ (0x1195A, "X"),
+ (0x119A0, "V"),
+ (0x119A8, "X"),
+ (0x119AA, "V"),
+ (0x119D8, "X"),
+ (0x119DA, "V"),
+ (0x119E5, "X"),
+ (0x11A00, "V"),
+ (0x11A48, "X"),
+ (0x11A50, "V"),
+ (0x11AA3, "X"),
+ (0x11AB0, "V"),
+ (0x11AF9, "X"),
+ (0x11B00, "V"),
+ (0x11B0A, "X"),
+ (0x11C00, "V"),
+ (0x11C09, "X"),
+ (0x11C0A, "V"),
+ (0x11C37, "X"),
+ (0x11C38, "V"),
+ (0x11C46, "X"),
+ (0x11C50, "V"),
+ (0x11C6D, "X"),
+ (0x11C70, "V"),
+ (0x11C90, "X"),
+ (0x11C92, "V"),
+ (0x11CA8, "X"),
+ (0x11CA9, "V"),
+ (0x11CB7, "X"),
+ (0x11D00, "V"),
+ (0x11D07, "X"),
+ (0x11D08, "V"),
+ (0x11D0A, "X"),
+ (0x11D0B, "V"),
+ (0x11D37, "X"),
+ (0x11D3A, "V"),
+ (0x11D3B, "X"),
+ (0x11D3C, "V"),
+ (0x11D3E, "X"),
+ (0x11D3F, "V"),
+ (0x11D48, "X"),
+ (0x11D50, "V"),
+ (0x11D5A, "X"),
+ (0x11D60, "V"),
+ (0x11D66, "X"),
+ (0x11D67, "V"),
+ (0x11D69, "X"),
+ (0x11D6A, "V"),
+ (0x11D8F, "X"),
+ (0x11D90, "V"),
+ (0x11D92, "X"),
+ (0x11D93, "V"),
+ (0x11D99, "X"),
+ (0x11DA0, "V"),
+ (0x11DAA, "X"),
+ (0x11EE0, "V"),
+ (0x11EF9, "X"),
+ (0x11F00, "V"),
+ (0x11F11, "X"),
+ (0x11F12, "V"),
+ (0x11F3B, "X"),
+ (0x11F3E, "V"),
+ ]
+
+
+def _seg_59() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0x11F5A, "X"),
+ (0x11FB0, "V"),
+ (0x11FB1, "X"),
+ (0x11FC0, "V"),
+ (0x11FF2, "X"),
+ (0x11FFF, "V"),
+ (0x1239A, "X"),
+ (0x12400, "V"),
+ (0x1246F, "X"),
+ (0x12470, "V"),
+ (0x12475, "X"),
+ (0x12480, "V"),
+ (0x12544, "X"),
+ (0x12F90, "V"),
+ (0x12FF3, "X"),
+ (0x13000, "V"),
+ (0x13430, "X"),
+ (0x13440, "V"),
+ (0x13456, "X"),
+ (0x14400, "V"),
+ (0x14647, "X"),
+ (0x16800, "V"),
+ (0x16A39, "X"),
+ (0x16A40, "V"),
+ (0x16A5F, "X"),
+ (0x16A60, "V"),
+ (0x16A6A, "X"),
+ (0x16A6E, "V"),
+ (0x16ABF, "X"),
+ (0x16AC0, "V"),
+ (0x16ACA, "X"),
+ (0x16AD0, "V"),
+ (0x16AEE, "X"),
+ (0x16AF0, "V"),
+ (0x16AF6, "X"),
+ (0x16B00, "V"),
+ (0x16B46, "X"),
+ (0x16B50, "V"),
+ (0x16B5A, "X"),
+ (0x16B5B, "V"),
+ (0x16B62, "X"),
+ (0x16B63, "V"),
+ (0x16B78, "X"),
+ (0x16B7D, "V"),
+ (0x16B90, "X"),
+ (0x16E40, "M", "𖹠"),
+ (0x16E41, "M", "𖹡"),
+ (0x16E42, "M", "𖹢"),
+ (0x16E43, "M", "𖹣"),
+ (0x16E44, "M", "𖹤"),
+ (0x16E45, "M", "𖹥"),
+ (0x16E46, "M", "𖹦"),
+ (0x16E47, "M", "𖹧"),
+ (0x16E48, "M", "𖹨"),
+ (0x16E49, "M", "𖹩"),
+ (0x16E4A, "M", "𖹪"),
+ (0x16E4B, "M", "𖹫"),
+ (0x16E4C, "M", "𖹬"),
+ (0x16E4D, "M", "𖹭"),
+ (0x16E4E, "M", "𖹮"),
+ (0x16E4F, "M", "𖹯"),
+ (0x16E50, "M", "𖹰"),
+ (0x16E51, "M", "𖹱"),
+ (0x16E52, "M", "𖹲"),
+ (0x16E53, "M", "𖹳"),
+ (0x16E54, "M", "𖹴"),
+ (0x16E55, "M", "𖹵"),
+ (0x16E56, "M", "𖹶"),
+ (0x16E57, "M", "𖹷"),
+ (0x16E58, "M", "𖹸"),
+ (0x16E59, "M", "𖹹"),
+ (0x16E5A, "M", "𖹺"),
+ (0x16E5B, "M", "𖹻"),
+ (0x16E5C, "M", "𖹼"),
+ (0x16E5D, "M", "𖹽"),
+ (0x16E5E, "M", "𖹾"),
+ (0x16E5F, "M", "𖹿"),
+ (0x16E60, "V"),
+ (0x16E9B, "X"),
+ (0x16F00, "V"),
+ (0x16F4B, "X"),
+ (0x16F4F, "V"),
+ (0x16F88, "X"),
+ (0x16F8F, "V"),
+ (0x16FA0, "X"),
+ (0x16FE0, "V"),
+ (0x16FE5, "X"),
+ (0x16FF0, "V"),
+ (0x16FF2, "X"),
+ (0x17000, "V"),
+ (0x187F8, "X"),
+ (0x18800, "V"),
+ (0x18CD6, "X"),
+ (0x18D00, "V"),
+ (0x18D09, "X"),
+ (0x1AFF0, "V"),
+ (0x1AFF4, "X"),
+ (0x1AFF5, "V"),
+ (0x1AFFC, "X"),
+ (0x1AFFD, "V"),
+ ]
+
+
+def _seg_60() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0x1AFFF, "X"),
+ (0x1B000, "V"),
+ (0x1B123, "X"),
+ (0x1B132, "V"),
+ (0x1B133, "X"),
+ (0x1B150, "V"),
+ (0x1B153, "X"),
+ (0x1B155, "V"),
+ (0x1B156, "X"),
+ (0x1B164, "V"),
+ (0x1B168, "X"),
+ (0x1B170, "V"),
+ (0x1B2FC, "X"),
+ (0x1BC00, "V"),
+ (0x1BC6B, "X"),
+ (0x1BC70, "V"),
+ (0x1BC7D, "X"),
+ (0x1BC80, "V"),
+ (0x1BC89, "X"),
+ (0x1BC90, "V"),
+ (0x1BC9A, "X"),
+ (0x1BC9C, "V"),
+ (0x1BCA0, "I"),
+ (0x1BCA4, "X"),
+ (0x1CF00, "V"),
+ (0x1CF2E, "X"),
+ (0x1CF30, "V"),
+ (0x1CF47, "X"),
+ (0x1CF50, "V"),
+ (0x1CFC4, "X"),
+ (0x1D000, "V"),
+ (0x1D0F6, "X"),
+ (0x1D100, "V"),
+ (0x1D127, "X"),
+ (0x1D129, "V"),
+ (0x1D15E, "M", "𝅗𝅥"),
+ (0x1D15F, "M", "𝅘𝅥"),
+ (0x1D160, "M", "𝅘𝅥𝅮"),
+ (0x1D161, "M", "𝅘𝅥𝅯"),
+ (0x1D162, "M", "𝅘𝅥𝅰"),
+ (0x1D163, "M", "𝅘𝅥𝅱"),
+ (0x1D164, "M", "𝅘𝅥𝅲"),
+ (0x1D165, "V"),
+ (0x1D173, "X"),
+ (0x1D17B, "V"),
+ (0x1D1BB, "M", "𝆹𝅥"),
+ (0x1D1BC, "M", "𝆺𝅥"),
+ (0x1D1BD, "M", "𝆹𝅥𝅮"),
+ (0x1D1BE, "M", "𝆺𝅥𝅮"),
+ (0x1D1BF, "M", "𝆹𝅥𝅯"),
+ (0x1D1C0, "M", "𝆺𝅥𝅯"),
+ (0x1D1C1, "V"),
+ (0x1D1EB, "X"),
+ (0x1D200, "V"),
+ (0x1D246, "X"),
+ (0x1D2C0, "V"),
+ (0x1D2D4, "X"),
+ (0x1D2E0, "V"),
+ (0x1D2F4, "X"),
+ (0x1D300, "V"),
+ (0x1D357, "X"),
+ (0x1D360, "V"),
+ (0x1D379, "X"),
+ (0x1D400, "M", "a"),
+ (0x1D401, "M", "b"),
+ (0x1D402, "M", "c"),
+ (0x1D403, "M", "d"),
+ (0x1D404, "M", "e"),
+ (0x1D405, "M", "f"),
+ (0x1D406, "M", "g"),
+ (0x1D407, "M", "h"),
+ (0x1D408, "M", "i"),
+ (0x1D409, "M", "j"),
+ (0x1D40A, "M", "k"),
+ (0x1D40B, "M", "l"),
+ (0x1D40C, "M", "m"),
+ (0x1D40D, "M", "n"),
+ (0x1D40E, "M", "o"),
+ (0x1D40F, "M", "p"),
+ (0x1D410, "M", "q"),
+ (0x1D411, "M", "r"),
+ (0x1D412, "M", "s"),
+ (0x1D413, "M", "t"),
+ (0x1D414, "M", "u"),
+ (0x1D415, "M", "v"),
+ (0x1D416, "M", "w"),
+ (0x1D417, "M", "x"),
+ (0x1D418, "M", "y"),
+ (0x1D419, "M", "z"),
+ (0x1D41A, "M", "a"),
+ (0x1D41B, "M", "b"),
+ (0x1D41C, "M", "c"),
+ (0x1D41D, "M", "d"),
+ (0x1D41E, "M", "e"),
+ (0x1D41F, "M", "f"),
+ (0x1D420, "M", "g"),
+ (0x1D421, "M", "h"),
+ (0x1D422, "M", "i"),
+ (0x1D423, "M", "j"),
+ (0x1D424, "M", "k"),
+ ]
+
+
+def _seg_61() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0x1D425, "M", "l"),
+ (0x1D426, "M", "m"),
+ (0x1D427, "M", "n"),
+ (0x1D428, "M", "o"),
+ (0x1D429, "M", "p"),
+ (0x1D42A, "M", "q"),
+ (0x1D42B, "M", "r"),
+ (0x1D42C, "M", "s"),
+ (0x1D42D, "M", "t"),
+ (0x1D42E, "M", "u"),
+ (0x1D42F, "M", "v"),
+ (0x1D430, "M", "w"),
+ (0x1D431, "M", "x"),
+ (0x1D432, "M", "y"),
+ (0x1D433, "M", "z"),
+ (0x1D434, "M", "a"),
+ (0x1D435, "M", "b"),
+ (0x1D436, "M", "c"),
+ (0x1D437, "M", "d"),
+ (0x1D438, "M", "e"),
+ (0x1D439, "M", "f"),
+ (0x1D43A, "M", "g"),
+ (0x1D43B, "M", "h"),
+ (0x1D43C, "M", "i"),
+ (0x1D43D, "M", "j"),
+ (0x1D43E, "M", "k"),
+ (0x1D43F, "M", "l"),
+ (0x1D440, "M", "m"),
+ (0x1D441, "M", "n"),
+ (0x1D442, "M", "o"),
+ (0x1D443, "M", "p"),
+ (0x1D444, "M", "q"),
+ (0x1D445, "M", "r"),
+ (0x1D446, "M", "s"),
+ (0x1D447, "M", "t"),
+ (0x1D448, "M", "u"),
+ (0x1D449, "M", "v"),
+ (0x1D44A, "M", "w"),
+ (0x1D44B, "M", "x"),
+ (0x1D44C, "M", "y"),
+ (0x1D44D, "M", "z"),
+ (0x1D44E, "M", "a"),
+ (0x1D44F, "M", "b"),
+ (0x1D450, "M", "c"),
+ (0x1D451, "M", "d"),
+ (0x1D452, "M", "e"),
+ (0x1D453, "M", "f"),
+ (0x1D454, "M", "g"),
+ (0x1D455, "X"),
+ (0x1D456, "M", "i"),
+ (0x1D457, "M", "j"),
+ (0x1D458, "M", "k"),
+ (0x1D459, "M", "l"),
+ (0x1D45A, "M", "m"),
+ (0x1D45B, "M", "n"),
+ (0x1D45C, "M", "o"),
+ (0x1D45D, "M", "p"),
+ (0x1D45E, "M", "q"),
+ (0x1D45F, "M", "r"),
+ (0x1D460, "M", "s"),
+ (0x1D461, "M", "t"),
+ (0x1D462, "M", "u"),
+ (0x1D463, "M", "v"),
+ (0x1D464, "M", "w"),
+ (0x1D465, "M", "x"),
+ (0x1D466, "M", "y"),
+ (0x1D467, "M", "z"),
+ (0x1D468, "M", "a"),
+ (0x1D469, "M", "b"),
+ (0x1D46A, "M", "c"),
+ (0x1D46B, "M", "d"),
+ (0x1D46C, "M", "e"),
+ (0x1D46D, "M", "f"),
+ (0x1D46E, "M", "g"),
+ (0x1D46F, "M", "h"),
+ (0x1D470, "M", "i"),
+ (0x1D471, "M", "j"),
+ (0x1D472, "M", "k"),
+ (0x1D473, "M", "l"),
+ (0x1D474, "M", "m"),
+ (0x1D475, "M", "n"),
+ (0x1D476, "M", "o"),
+ (0x1D477, "M", "p"),
+ (0x1D478, "M", "q"),
+ (0x1D479, "M", "r"),
+ (0x1D47A, "M", "s"),
+ (0x1D47B, "M", "t"),
+ (0x1D47C, "M", "u"),
+ (0x1D47D, "M", "v"),
+ (0x1D47E, "M", "w"),
+ (0x1D47F, "M", "x"),
+ (0x1D480, "M", "y"),
+ (0x1D481, "M", "z"),
+ (0x1D482, "M", "a"),
+ (0x1D483, "M", "b"),
+ (0x1D484, "M", "c"),
+ (0x1D485, "M", "d"),
+ (0x1D486, "M", "e"),
+ (0x1D487, "M", "f"),
+ (0x1D488, "M", "g"),
+ ]
+
+
+def _seg_62() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0x1D489, "M", "h"),
+ (0x1D48A, "M", "i"),
+ (0x1D48B, "M", "j"),
+ (0x1D48C, "M", "k"),
+ (0x1D48D, "M", "l"),
+ (0x1D48E, "M", "m"),
+ (0x1D48F, "M", "n"),
+ (0x1D490, "M", "o"),
+ (0x1D491, "M", "p"),
+ (0x1D492, "M", "q"),
+ (0x1D493, "M", "r"),
+ (0x1D494, "M", "s"),
+ (0x1D495, "M", "t"),
+ (0x1D496, "M", "u"),
+ (0x1D497, "M", "v"),
+ (0x1D498, "M", "w"),
+ (0x1D499, "M", "x"),
+ (0x1D49A, "M", "y"),
+ (0x1D49B, "M", "z"),
+ (0x1D49C, "M", "a"),
+ (0x1D49D, "X"),
+ (0x1D49E, "M", "c"),
+ (0x1D49F, "M", "d"),
+ (0x1D4A0, "X"),
+ (0x1D4A2, "M", "g"),
+ (0x1D4A3, "X"),
+ (0x1D4A5, "M", "j"),
+ (0x1D4A6, "M", "k"),
+ (0x1D4A7, "X"),
+ (0x1D4A9, "M", "n"),
+ (0x1D4AA, "M", "o"),
+ (0x1D4AB, "M", "p"),
+ (0x1D4AC, "M", "q"),
+ (0x1D4AD, "X"),
+ (0x1D4AE, "M", "s"),
+ (0x1D4AF, "M", "t"),
+ (0x1D4B0, "M", "u"),
+ (0x1D4B1, "M", "v"),
+ (0x1D4B2, "M", "w"),
+ (0x1D4B3, "M", "x"),
+ (0x1D4B4, "M", "y"),
+ (0x1D4B5, "M", "z"),
+ (0x1D4B6, "M", "a"),
+ (0x1D4B7, "M", "b"),
+ (0x1D4B8, "M", "c"),
+ (0x1D4B9, "M", "d"),
+ (0x1D4BA, "X"),
+ (0x1D4BB, "M", "f"),
+ (0x1D4BC, "X"),
+ (0x1D4BD, "M", "h"),
+ (0x1D4BE, "M", "i"),
+ (0x1D4BF, "M", "j"),
+ (0x1D4C0, "M", "k"),
+ (0x1D4C1, "M", "l"),
+ (0x1D4C2, "M", "m"),
+ (0x1D4C3, "M", "n"),
+ (0x1D4C4, "X"),
+ (0x1D4C5, "M", "p"),
+ (0x1D4C6, "M", "q"),
+ (0x1D4C7, "M", "r"),
+ (0x1D4C8, "M", "s"),
+ (0x1D4C9, "M", "t"),
+ (0x1D4CA, "M", "u"),
+ (0x1D4CB, "M", "v"),
+ (0x1D4CC, "M", "w"),
+ (0x1D4CD, "M", "x"),
+ (0x1D4CE, "M", "y"),
+ (0x1D4CF, "M", "z"),
+ (0x1D4D0, "M", "a"),
+ (0x1D4D1, "M", "b"),
+ (0x1D4D2, "M", "c"),
+ (0x1D4D3, "M", "d"),
+ (0x1D4D4, "M", "e"),
+ (0x1D4D5, "M", "f"),
+ (0x1D4D6, "M", "g"),
+ (0x1D4D7, "M", "h"),
+ (0x1D4D8, "M", "i"),
+ (0x1D4D9, "M", "j"),
+ (0x1D4DA, "M", "k"),
+ (0x1D4DB, "M", "l"),
+ (0x1D4DC, "M", "m"),
+ (0x1D4DD, "M", "n"),
+ (0x1D4DE, "M", "o"),
+ (0x1D4DF, "M", "p"),
+ (0x1D4E0, "M", "q"),
+ (0x1D4E1, "M", "r"),
+ (0x1D4E2, "M", "s"),
+ (0x1D4E3, "M", "t"),
+ (0x1D4E4, "M", "u"),
+ (0x1D4E5, "M", "v"),
+ (0x1D4E6, "M", "w"),
+ (0x1D4E7, "M", "x"),
+ (0x1D4E8, "M", "y"),
+ (0x1D4E9, "M", "z"),
+ (0x1D4EA, "M", "a"),
+ (0x1D4EB, "M", "b"),
+ (0x1D4EC, "M", "c"),
+ (0x1D4ED, "M", "d"),
+ (0x1D4EE, "M", "e"),
+ (0x1D4EF, "M", "f"),
+ ]
+
+
+def _seg_63() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0x1D4F0, "M", "g"),
+ (0x1D4F1, "M", "h"),
+ (0x1D4F2, "M", "i"),
+ (0x1D4F3, "M", "j"),
+ (0x1D4F4, "M", "k"),
+ (0x1D4F5, "M", "l"),
+ (0x1D4F6, "M", "m"),
+ (0x1D4F7, "M", "n"),
+ (0x1D4F8, "M", "o"),
+ (0x1D4F9, "M", "p"),
+ (0x1D4FA, "M", "q"),
+ (0x1D4FB, "M", "r"),
+ (0x1D4FC, "M", "s"),
+ (0x1D4FD, "M", "t"),
+ (0x1D4FE, "M", "u"),
+ (0x1D4FF, "M", "v"),
+ (0x1D500, "M", "w"),
+ (0x1D501, "M", "x"),
+ (0x1D502, "M", "y"),
+ (0x1D503, "M", "z"),
+ (0x1D504, "M", "a"),
+ (0x1D505, "M", "b"),
+ (0x1D506, "X"),
+ (0x1D507, "M", "d"),
+ (0x1D508, "M", "e"),
+ (0x1D509, "M", "f"),
+ (0x1D50A, "M", "g"),
+ (0x1D50B, "X"),
+ (0x1D50D, "M", "j"),
+ (0x1D50E, "M", "k"),
+ (0x1D50F, "M", "l"),
+ (0x1D510, "M", "m"),
+ (0x1D511, "M", "n"),
+ (0x1D512, "M", "o"),
+ (0x1D513, "M", "p"),
+ (0x1D514, "M", "q"),
+ (0x1D515, "X"),
+ (0x1D516, "M", "s"),
+ (0x1D517, "M", "t"),
+ (0x1D518, "M", "u"),
+ (0x1D519, "M", "v"),
+ (0x1D51A, "M", "w"),
+ (0x1D51B, "M", "x"),
+ (0x1D51C, "M", "y"),
+ (0x1D51D, "X"),
+ (0x1D51E, "M", "a"),
+ (0x1D51F, "M", "b"),
+ (0x1D520, "M", "c"),
+ (0x1D521, "M", "d"),
+ (0x1D522, "M", "e"),
+ (0x1D523, "M", "f"),
+ (0x1D524, "M", "g"),
+ (0x1D525, "M", "h"),
+ (0x1D526, "M", "i"),
+ (0x1D527, "M", "j"),
+ (0x1D528, "M", "k"),
+ (0x1D529, "M", "l"),
+ (0x1D52A, "M", "m"),
+ (0x1D52B, "M", "n"),
+ (0x1D52C, "M", "o"),
+ (0x1D52D, "M", "p"),
+ (0x1D52E, "M", "q"),
+ (0x1D52F, "M", "r"),
+ (0x1D530, "M", "s"),
+ (0x1D531, "M", "t"),
+ (0x1D532, "M", "u"),
+ (0x1D533, "M", "v"),
+ (0x1D534, "M", "w"),
+ (0x1D535, "M", "x"),
+ (0x1D536, "M", "y"),
+ (0x1D537, "M", "z"),
+ (0x1D538, "M", "a"),
+ (0x1D539, "M", "b"),
+ (0x1D53A, "X"),
+ (0x1D53B, "M", "d"),
+ (0x1D53C, "M", "e"),
+ (0x1D53D, "M", "f"),
+ (0x1D53E, "M", "g"),
+ (0x1D53F, "X"),
+ (0x1D540, "M", "i"),
+ (0x1D541, "M", "j"),
+ (0x1D542, "M", "k"),
+ (0x1D543, "M", "l"),
+ (0x1D544, "M", "m"),
+ (0x1D545, "X"),
+ (0x1D546, "M", "o"),
+ (0x1D547, "X"),
+ (0x1D54A, "M", "s"),
+ (0x1D54B, "M", "t"),
+ (0x1D54C, "M", "u"),
+ (0x1D54D, "M", "v"),
+ (0x1D54E, "M", "w"),
+ (0x1D54F, "M", "x"),
+ (0x1D550, "M", "y"),
+ (0x1D551, "X"),
+ (0x1D552, "M", "a"),
+ (0x1D553, "M", "b"),
+ (0x1D554, "M", "c"),
+ (0x1D555, "M", "d"),
+ (0x1D556, "M", "e"),
+ ]
+
+
+def _seg_64() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0x1D557, "M", "f"),
+ (0x1D558, "M", "g"),
+ (0x1D559, "M", "h"),
+ (0x1D55A, "M", "i"),
+ (0x1D55B, "M", "j"),
+ (0x1D55C, "M", "k"),
+ (0x1D55D, "M", "l"),
+ (0x1D55E, "M", "m"),
+ (0x1D55F, "M", "n"),
+ (0x1D560, "M", "o"),
+ (0x1D561, "M", "p"),
+ (0x1D562, "M", "q"),
+ (0x1D563, "M", "r"),
+ (0x1D564, "M", "s"),
+ (0x1D565, "M", "t"),
+ (0x1D566, "M", "u"),
+ (0x1D567, "M", "v"),
+ (0x1D568, "M", "w"),
+ (0x1D569, "M", "x"),
+ (0x1D56A, "M", "y"),
+ (0x1D56B, "M", "z"),
+ (0x1D56C, "M", "a"),
+ (0x1D56D, "M", "b"),
+ (0x1D56E, "M", "c"),
+ (0x1D56F, "M", "d"),
+ (0x1D570, "M", "e"),
+ (0x1D571, "M", "f"),
+ (0x1D572, "M", "g"),
+ (0x1D573, "M", "h"),
+ (0x1D574, "M", "i"),
+ (0x1D575, "M", "j"),
+ (0x1D576, "M", "k"),
+ (0x1D577, "M", "l"),
+ (0x1D578, "M", "m"),
+ (0x1D579, "M", "n"),
+ (0x1D57A, "M", "o"),
+ (0x1D57B, "M", "p"),
+ (0x1D57C, "M", "q"),
+ (0x1D57D, "M", "r"),
+ (0x1D57E, "M", "s"),
+ (0x1D57F, "M", "t"),
+ (0x1D580, "M", "u"),
+ (0x1D581, "M", "v"),
+ (0x1D582, "M", "w"),
+ (0x1D583, "M", "x"),
+ (0x1D584, "M", "y"),
+ (0x1D585, "M", "z"),
+ (0x1D586, "M", "a"),
+ (0x1D587, "M", "b"),
+ (0x1D588, "M", "c"),
+ (0x1D589, "M", "d"),
+ (0x1D58A, "M", "e"),
+ (0x1D58B, "M", "f"),
+ (0x1D58C, "M", "g"),
+ (0x1D58D, "M", "h"),
+ (0x1D58E, "M", "i"),
+ (0x1D58F, "M", "j"),
+ (0x1D590, "M", "k"),
+ (0x1D591, "M", "l"),
+ (0x1D592, "M", "m"),
+ (0x1D593, "M", "n"),
+ (0x1D594, "M", "o"),
+ (0x1D595, "M", "p"),
+ (0x1D596, "M", "q"),
+ (0x1D597, "M", "r"),
+ (0x1D598, "M", "s"),
+ (0x1D599, "M", "t"),
+ (0x1D59A, "M", "u"),
+ (0x1D59B, "M", "v"),
+ (0x1D59C, "M", "w"),
+ (0x1D59D, "M", "x"),
+ (0x1D59E, "M", "y"),
+ (0x1D59F, "M", "z"),
+ (0x1D5A0, "M", "a"),
+ (0x1D5A1, "M", "b"),
+ (0x1D5A2, "M", "c"),
+ (0x1D5A3, "M", "d"),
+ (0x1D5A4, "M", "e"),
+ (0x1D5A5, "M", "f"),
+ (0x1D5A6, "M", "g"),
+ (0x1D5A7, "M", "h"),
+ (0x1D5A8, "M", "i"),
+ (0x1D5A9, "M", "j"),
+ (0x1D5AA, "M", "k"),
+ (0x1D5AB, "M", "l"),
+ (0x1D5AC, "M", "m"),
+ (0x1D5AD, "M", "n"),
+ (0x1D5AE, "M", "o"),
+ (0x1D5AF, "M", "p"),
+ (0x1D5B0, "M", "q"),
+ (0x1D5B1, "M", "r"),
+ (0x1D5B2, "M", "s"),
+ (0x1D5B3, "M", "t"),
+ (0x1D5B4, "M", "u"),
+ (0x1D5B5, "M", "v"),
+ (0x1D5B6, "M", "w"),
+ (0x1D5B7, "M", "x"),
+ (0x1D5B8, "M", "y"),
+ (0x1D5B9, "M", "z"),
+ (0x1D5BA, "M", "a"),
+ ]
+
+
+def _seg_65() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0x1D5BB, "M", "b"),
+ (0x1D5BC, "M", "c"),
+ (0x1D5BD, "M", "d"),
+ (0x1D5BE, "M", "e"),
+ (0x1D5BF, "M", "f"),
+ (0x1D5C0, "M", "g"),
+ (0x1D5C1, "M", "h"),
+ (0x1D5C2, "M", "i"),
+ (0x1D5C3, "M", "j"),
+ (0x1D5C4, "M", "k"),
+ (0x1D5C5, "M", "l"),
+ (0x1D5C6, "M", "m"),
+ (0x1D5C7, "M", "n"),
+ (0x1D5C8, "M", "o"),
+ (0x1D5C9, "M", "p"),
+ (0x1D5CA, "M", "q"),
+ (0x1D5CB, "M", "r"),
+ (0x1D5CC, "M", "s"),
+ (0x1D5CD, "M", "t"),
+ (0x1D5CE, "M", "u"),
+ (0x1D5CF, "M", "v"),
+ (0x1D5D0, "M", "w"),
+ (0x1D5D1, "M", "x"),
+ (0x1D5D2, "M", "y"),
+ (0x1D5D3, "M", "z"),
+ (0x1D5D4, "M", "a"),
+ (0x1D5D5, "M", "b"),
+ (0x1D5D6, "M", "c"),
+ (0x1D5D7, "M", "d"),
+ (0x1D5D8, "M", "e"),
+ (0x1D5D9, "M", "f"),
+ (0x1D5DA, "M", "g"),
+ (0x1D5DB, "M", "h"),
+ (0x1D5DC, "M", "i"),
+ (0x1D5DD, "M", "j"),
+ (0x1D5DE, "M", "k"),
+ (0x1D5DF, "M", "l"),
+ (0x1D5E0, "M", "m"),
+ (0x1D5E1, "M", "n"),
+ (0x1D5E2, "M", "o"),
+ (0x1D5E3, "M", "p"),
+ (0x1D5E4, "M", "q"),
+ (0x1D5E5, "M", "r"),
+ (0x1D5E6, "M", "s"),
+ (0x1D5E7, "M", "t"),
+ (0x1D5E8, "M", "u"),
+ (0x1D5E9, "M", "v"),
+ (0x1D5EA, "M", "w"),
+ (0x1D5EB, "M", "x"),
+ (0x1D5EC, "M", "y"),
+ (0x1D5ED, "M", "z"),
+ (0x1D5EE, "M", "a"),
+ (0x1D5EF, "M", "b"),
+ (0x1D5F0, "M", "c"),
+ (0x1D5F1, "M", "d"),
+ (0x1D5F2, "M", "e"),
+ (0x1D5F3, "M", "f"),
+ (0x1D5F4, "M", "g"),
+ (0x1D5F5, "M", "h"),
+ (0x1D5F6, "M", "i"),
+ (0x1D5F7, "M", "j"),
+ (0x1D5F8, "M", "k"),
+ (0x1D5F9, "M", "l"),
+ (0x1D5FA, "M", "m"),
+ (0x1D5FB, "M", "n"),
+ (0x1D5FC, "M", "o"),
+ (0x1D5FD, "M", "p"),
+ (0x1D5FE, "M", "q"),
+ (0x1D5FF, "M", "r"),
+ (0x1D600, "M", "s"),
+ (0x1D601, "M", "t"),
+ (0x1D602, "M", "u"),
+ (0x1D603, "M", "v"),
+ (0x1D604, "M", "w"),
+ (0x1D605, "M", "x"),
+ (0x1D606, "M", "y"),
+ (0x1D607, "M", "z"),
+ (0x1D608, "M", "a"),
+ (0x1D609, "M", "b"),
+ (0x1D60A, "M", "c"),
+ (0x1D60B, "M", "d"),
+ (0x1D60C, "M", "e"),
+ (0x1D60D, "M", "f"),
+ (0x1D60E, "M", "g"),
+ (0x1D60F, "M", "h"),
+ (0x1D610, "M", "i"),
+ (0x1D611, "M", "j"),
+ (0x1D612, "M", "k"),
+ (0x1D613, "M", "l"),
+ (0x1D614, "M", "m"),
+ (0x1D615, "M", "n"),
+ (0x1D616, "M", "o"),
+ (0x1D617, "M", "p"),
+ (0x1D618, "M", "q"),
+ (0x1D619, "M", "r"),
+ (0x1D61A, "M", "s"),
+ (0x1D61B, "M", "t"),
+ (0x1D61C, "M", "u"),
+ (0x1D61D, "M", "v"),
+ (0x1D61E, "M", "w"),
+ ]
+
+
+def _seg_66() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0x1D61F, "M", "x"),
+ (0x1D620, "M", "y"),
+ (0x1D621, "M", "z"),
+ (0x1D622, "M", "a"),
+ (0x1D623, "M", "b"),
+ (0x1D624, "M", "c"),
+ (0x1D625, "M", "d"),
+ (0x1D626, "M", "e"),
+ (0x1D627, "M", "f"),
+ (0x1D628, "M", "g"),
+ (0x1D629, "M", "h"),
+ (0x1D62A, "M", "i"),
+ (0x1D62B, "M", "j"),
+ (0x1D62C, "M", "k"),
+ (0x1D62D, "M", "l"),
+ (0x1D62E, "M", "m"),
+ (0x1D62F, "M", "n"),
+ (0x1D630, "M", "o"),
+ (0x1D631, "M", "p"),
+ (0x1D632, "M", "q"),
+ (0x1D633, "M", "r"),
+ (0x1D634, "M", "s"),
+ (0x1D635, "M", "t"),
+ (0x1D636, "M", "u"),
+ (0x1D637, "M", "v"),
+ (0x1D638, "M", "w"),
+ (0x1D639, "M", "x"),
+ (0x1D63A, "M", "y"),
+ (0x1D63B, "M", "z"),
+ (0x1D63C, "M", "a"),
+ (0x1D63D, "M", "b"),
+ (0x1D63E, "M", "c"),
+ (0x1D63F, "M", "d"),
+ (0x1D640, "M", "e"),
+ (0x1D641, "M", "f"),
+ (0x1D642, "M", "g"),
+ (0x1D643, "M", "h"),
+ (0x1D644, "M", "i"),
+ (0x1D645, "M", "j"),
+ (0x1D646, "M", "k"),
+ (0x1D647, "M", "l"),
+ (0x1D648, "M", "m"),
+ (0x1D649, "M", "n"),
+ (0x1D64A, "M", "o"),
+ (0x1D64B, "M", "p"),
+ (0x1D64C, "M", "q"),
+ (0x1D64D, "M", "r"),
+ (0x1D64E, "M", "s"),
+ (0x1D64F, "M", "t"),
+ (0x1D650, "M", "u"),
+ (0x1D651, "M", "v"),
+ (0x1D652, "M", "w"),
+ (0x1D653, "M", "x"),
+ (0x1D654, "M", "y"),
+ (0x1D655, "M", "z"),
+ (0x1D656, "M", "a"),
+ (0x1D657, "M", "b"),
+ (0x1D658, "M", "c"),
+ (0x1D659, "M", "d"),
+ (0x1D65A, "M", "e"),
+ (0x1D65B, "M", "f"),
+ (0x1D65C, "M", "g"),
+ (0x1D65D, "M", "h"),
+ (0x1D65E, "M", "i"),
+ (0x1D65F, "M", "j"),
+ (0x1D660, "M", "k"),
+ (0x1D661, "M", "l"),
+ (0x1D662, "M", "m"),
+ (0x1D663, "M", "n"),
+ (0x1D664, "M", "o"),
+ (0x1D665, "M", "p"),
+ (0x1D666, "M", "q"),
+ (0x1D667, "M", "r"),
+ (0x1D668, "M", "s"),
+ (0x1D669, "M", "t"),
+ (0x1D66A, "M", "u"),
+ (0x1D66B, "M", "v"),
+ (0x1D66C, "M", "w"),
+ (0x1D66D, "M", "x"),
+ (0x1D66E, "M", "y"),
+ (0x1D66F, "M", "z"),
+ (0x1D670, "M", "a"),
+ (0x1D671, "M", "b"),
+ (0x1D672, "M", "c"),
+ (0x1D673, "M", "d"),
+ (0x1D674, "M", "e"),
+ (0x1D675, "M", "f"),
+ (0x1D676, "M", "g"),
+ (0x1D677, "M", "h"),
+ (0x1D678, "M", "i"),
+ (0x1D679, "M", "j"),
+ (0x1D67A, "M", "k"),
+ (0x1D67B, "M", "l"),
+ (0x1D67C, "M", "m"),
+ (0x1D67D, "M", "n"),
+ (0x1D67E, "M", "o"),
+ (0x1D67F, "M", "p"),
+ (0x1D680, "M", "q"),
+ (0x1D681, "M", "r"),
+ (0x1D682, "M", "s"),
+ ]
+
+
+def _seg_67() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0x1D683, "M", "t"),
+ (0x1D684, "M", "u"),
+ (0x1D685, "M", "v"),
+ (0x1D686, "M", "w"),
+ (0x1D687, "M", "x"),
+ (0x1D688, "M", "y"),
+ (0x1D689, "M", "z"),
+ (0x1D68A, "M", "a"),
+ (0x1D68B, "M", "b"),
+ (0x1D68C, "M", "c"),
+ (0x1D68D, "M", "d"),
+ (0x1D68E, "M", "e"),
+ (0x1D68F, "M", "f"),
+ (0x1D690, "M", "g"),
+ (0x1D691, "M", "h"),
+ (0x1D692, "M", "i"),
+ (0x1D693, "M", "j"),
+ (0x1D694, "M", "k"),
+ (0x1D695, "M", "l"),
+ (0x1D696, "M", "m"),
+ (0x1D697, "M", "n"),
+ (0x1D698, "M", "o"),
+ (0x1D699, "M", "p"),
+ (0x1D69A, "M", "q"),
+ (0x1D69B, "M", "r"),
+ (0x1D69C, "M", "s"),
+ (0x1D69D, "M", "t"),
+ (0x1D69E, "M", "u"),
+ (0x1D69F, "M", "v"),
+ (0x1D6A0, "M", "w"),
+ (0x1D6A1, "M", "x"),
+ (0x1D6A2, "M", "y"),
+ (0x1D6A3, "M", "z"),
+ (0x1D6A4, "M", "ı"),
+ (0x1D6A5, "M", "ȷ"),
+ (0x1D6A6, "X"),
+ (0x1D6A8, "M", "α"),
+ (0x1D6A9, "M", "β"),
+ (0x1D6AA, "M", "γ"),
+ (0x1D6AB, "M", "δ"),
+ (0x1D6AC, "M", "ε"),
+ (0x1D6AD, "M", "ζ"),
+ (0x1D6AE, "M", "η"),
+ (0x1D6AF, "M", "θ"),
+ (0x1D6B0, "M", "ι"),
+ (0x1D6B1, "M", "κ"),
+ (0x1D6B2, "M", "λ"),
+ (0x1D6B3, "M", "μ"),
+ (0x1D6B4, "M", "ν"),
+ (0x1D6B5, "M", "ξ"),
+ (0x1D6B6, "M", "ο"),
+ (0x1D6B7, "M", "π"),
+ (0x1D6B8, "M", "ρ"),
+ (0x1D6B9, "M", "θ"),
+ (0x1D6BA, "M", "σ"),
+ (0x1D6BB, "M", "τ"),
+ (0x1D6BC, "M", "υ"),
+ (0x1D6BD, "M", "φ"),
+ (0x1D6BE, "M", "χ"),
+ (0x1D6BF, "M", "ψ"),
+ (0x1D6C0, "M", "ω"),
+ (0x1D6C1, "M", "∇"),
+ (0x1D6C2, "M", "α"),
+ (0x1D6C3, "M", "β"),
+ (0x1D6C4, "M", "γ"),
+ (0x1D6C5, "M", "δ"),
+ (0x1D6C6, "M", "ε"),
+ (0x1D6C7, "M", "ζ"),
+ (0x1D6C8, "M", "η"),
+ (0x1D6C9, "M", "θ"),
+ (0x1D6CA, "M", "ι"),
+ (0x1D6CB, "M", "κ"),
+ (0x1D6CC, "M", "λ"),
+ (0x1D6CD, "M", "μ"),
+ (0x1D6CE, "M", "ν"),
+ (0x1D6CF, "M", "ξ"),
+ (0x1D6D0, "M", "ο"),
+ (0x1D6D1, "M", "π"),
+ (0x1D6D2, "M", "ρ"),
+ (0x1D6D3, "M", "σ"),
+ (0x1D6D5, "M", "τ"),
+ (0x1D6D6, "M", "υ"),
+ (0x1D6D7, "M", "φ"),
+ (0x1D6D8, "M", "χ"),
+ (0x1D6D9, "M", "ψ"),
+ (0x1D6DA, "M", "ω"),
+ (0x1D6DB, "M", "∂"),
+ (0x1D6DC, "M", "ε"),
+ (0x1D6DD, "M", "θ"),
+ (0x1D6DE, "M", "κ"),
+ (0x1D6DF, "M", "φ"),
+ (0x1D6E0, "M", "ρ"),
+ (0x1D6E1, "M", "π"),
+ (0x1D6E2, "M", "α"),
+ (0x1D6E3, "M", "β"),
+ (0x1D6E4, "M", "γ"),
+ (0x1D6E5, "M", "δ"),
+ (0x1D6E6, "M", "ε"),
+ (0x1D6E7, "M", "ζ"),
+ (0x1D6E8, "M", "η"),
+ ]
+
+
+def _seg_68() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0x1D6E9, "M", "θ"),
+ (0x1D6EA, "M", "ι"),
+ (0x1D6EB, "M", "κ"),
+ (0x1D6EC, "M", "λ"),
+ (0x1D6ED, "M", "μ"),
+ (0x1D6EE, "M", "ν"),
+ (0x1D6EF, "M", "ξ"),
+ (0x1D6F0, "M", "ο"),
+ (0x1D6F1, "M", "π"),
+ (0x1D6F2, "M", "ρ"),
+ (0x1D6F3, "M", "θ"),
+ (0x1D6F4, "M", "σ"),
+ (0x1D6F5, "M", "τ"),
+ (0x1D6F6, "M", "υ"),
+ (0x1D6F7, "M", "φ"),
+ (0x1D6F8, "M", "χ"),
+ (0x1D6F9, "M", "ψ"),
+ (0x1D6FA, "M", "ω"),
+ (0x1D6FB, "M", "∇"),
+ (0x1D6FC, "M", "α"),
+ (0x1D6FD, "M", "β"),
+ (0x1D6FE, "M", "γ"),
+ (0x1D6FF, "M", "δ"),
+ (0x1D700, "M", "ε"),
+ (0x1D701, "M", "ζ"),
+ (0x1D702, "M", "η"),
+ (0x1D703, "M", "θ"),
+ (0x1D704, "M", "ι"),
+ (0x1D705, "M", "κ"),
+ (0x1D706, "M", "λ"),
+ (0x1D707, "M", "μ"),
+ (0x1D708, "M", "ν"),
+ (0x1D709, "M", "ξ"),
+ (0x1D70A, "M", "ο"),
+ (0x1D70B, "M", "π"),
+ (0x1D70C, "M", "ρ"),
+ (0x1D70D, "M", "σ"),
+ (0x1D70F, "M", "τ"),
+ (0x1D710, "M", "υ"),
+ (0x1D711, "M", "φ"),
+ (0x1D712, "M", "χ"),
+ (0x1D713, "M", "ψ"),
+ (0x1D714, "M", "ω"),
+ (0x1D715, "M", "∂"),
+ (0x1D716, "M", "ε"),
+ (0x1D717, "M", "θ"),
+ (0x1D718, "M", "κ"),
+ (0x1D719, "M", "φ"),
+ (0x1D71A, "M", "ρ"),
+ (0x1D71B, "M", "π"),
+ (0x1D71C, "M", "α"),
+ (0x1D71D, "M", "β"),
+ (0x1D71E, "M", "γ"),
+ (0x1D71F, "M", "δ"),
+ (0x1D720, "M", "ε"),
+ (0x1D721, "M", "ζ"),
+ (0x1D722, "M", "η"),
+ (0x1D723, "M", "θ"),
+ (0x1D724, "M", "ι"),
+ (0x1D725, "M", "κ"),
+ (0x1D726, "M", "λ"),
+ (0x1D727, "M", "μ"),
+ (0x1D728, "M", "ν"),
+ (0x1D729, "M", "ξ"),
+ (0x1D72A, "M", "ο"),
+ (0x1D72B, "M", "π"),
+ (0x1D72C, "M", "ρ"),
+ (0x1D72D, "M", "θ"),
+ (0x1D72E, "M", "σ"),
+ (0x1D72F, "M", "τ"),
+ (0x1D730, "M", "υ"),
+ (0x1D731, "M", "φ"),
+ (0x1D732, "M", "χ"),
+ (0x1D733, "M", "ψ"),
+ (0x1D734, "M", "ω"),
+ (0x1D735, "M", "∇"),
+ (0x1D736, "M", "α"),
+ (0x1D737, "M", "β"),
+ (0x1D738, "M", "γ"),
+ (0x1D739, "M", "δ"),
+ (0x1D73A, "M", "ε"),
+ (0x1D73B, "M", "ζ"),
+ (0x1D73C, "M", "η"),
+ (0x1D73D, "M", "θ"),
+ (0x1D73E, "M", "ι"),
+ (0x1D73F, "M", "κ"),
+ (0x1D740, "M", "λ"),
+ (0x1D741, "M", "μ"),
+ (0x1D742, "M", "ν"),
+ (0x1D743, "M", "ξ"),
+ (0x1D744, "M", "ο"),
+ (0x1D745, "M", "π"),
+ (0x1D746, "M", "ρ"),
+ (0x1D747, "M", "σ"),
+ (0x1D749, "M", "τ"),
+ (0x1D74A, "M", "υ"),
+ (0x1D74B, "M", "φ"),
+ (0x1D74C, "M", "χ"),
+ (0x1D74D, "M", "ψ"),
+ (0x1D74E, "M", "ω"),
+ ]
+
+
+def _seg_69() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0x1D74F, "M", "∂"),
+ (0x1D750, "M", "ε"),
+ (0x1D751, "M", "θ"),
+ (0x1D752, "M", "κ"),
+ (0x1D753, "M", "φ"),
+ (0x1D754, "M", "ρ"),
+ (0x1D755, "M", "π"),
+ (0x1D756, "M", "α"),
+ (0x1D757, "M", "β"),
+ (0x1D758, "M", "γ"),
+ (0x1D759, "M", "δ"),
+ (0x1D75A, "M", "ε"),
+ (0x1D75B, "M", "ζ"),
+ (0x1D75C, "M", "η"),
+ (0x1D75D, "M", "θ"),
+ (0x1D75E, "M", "ι"),
+ (0x1D75F, "M", "κ"),
+ (0x1D760, "M", "λ"),
+ (0x1D761, "M", "μ"),
+ (0x1D762, "M", "ν"),
+ (0x1D763, "M", "ξ"),
+ (0x1D764, "M", "ο"),
+ (0x1D765, "M", "π"),
+ (0x1D766, "M", "ρ"),
+ (0x1D767, "M", "θ"),
+ (0x1D768, "M", "σ"),
+ (0x1D769, "M", "τ"),
+ (0x1D76A, "M", "υ"),
+ (0x1D76B, "M", "φ"),
+ (0x1D76C, "M", "χ"),
+ (0x1D76D, "M", "ψ"),
+ (0x1D76E, "M", "ω"),
+ (0x1D76F, "M", "∇"),
+ (0x1D770, "M", "α"),
+ (0x1D771, "M", "β"),
+ (0x1D772, "M", "γ"),
+ (0x1D773, "M", "δ"),
+ (0x1D774, "M", "ε"),
+ (0x1D775, "M", "ζ"),
+ (0x1D776, "M", "η"),
+ (0x1D777, "M", "θ"),
+ (0x1D778, "M", "ι"),
+ (0x1D779, "M", "κ"),
+ (0x1D77A, "M", "λ"),
+ (0x1D77B, "M", "μ"),
+ (0x1D77C, "M", "ν"),
+ (0x1D77D, "M", "ξ"),
+ (0x1D77E, "M", "ο"),
+ (0x1D77F, "M", "π"),
+ (0x1D780, "M", "ρ"),
+ (0x1D781, "M", "σ"),
+ (0x1D783, "M", "τ"),
+ (0x1D784, "M", "υ"),
+ (0x1D785, "M", "φ"),
+ (0x1D786, "M", "χ"),
+ (0x1D787, "M", "ψ"),
+ (0x1D788, "M", "ω"),
+ (0x1D789, "M", "∂"),
+ (0x1D78A, "M", "ε"),
+ (0x1D78B, "M", "θ"),
+ (0x1D78C, "M", "κ"),
+ (0x1D78D, "M", "φ"),
+ (0x1D78E, "M", "ρ"),
+ (0x1D78F, "M", "π"),
+ (0x1D790, "M", "α"),
+ (0x1D791, "M", "β"),
+ (0x1D792, "M", "γ"),
+ (0x1D793, "M", "δ"),
+ (0x1D794, "M", "ε"),
+ (0x1D795, "M", "ζ"),
+ (0x1D796, "M", "η"),
+ (0x1D797, "M", "θ"),
+ (0x1D798, "M", "ι"),
+ (0x1D799, "M", "κ"),
+ (0x1D79A, "M", "λ"),
+ (0x1D79B, "M", "μ"),
+ (0x1D79C, "M", "ν"),
+ (0x1D79D, "M", "ξ"),
+ (0x1D79E, "M", "ο"),
+ (0x1D79F, "M", "π"),
+ (0x1D7A0, "M", "ρ"),
+ (0x1D7A1, "M", "θ"),
+ (0x1D7A2, "M", "σ"),
+ (0x1D7A3, "M", "τ"),
+ (0x1D7A4, "M", "υ"),
+ (0x1D7A5, "M", "φ"),
+ (0x1D7A6, "M", "χ"),
+ (0x1D7A7, "M", "ψ"),
+ (0x1D7A8, "M", "ω"),
+ (0x1D7A9, "M", "∇"),
+ (0x1D7AA, "M", "α"),
+ (0x1D7AB, "M", "β"),
+ (0x1D7AC, "M", "γ"),
+ (0x1D7AD, "M", "δ"),
+ (0x1D7AE, "M", "ε"),
+ (0x1D7AF, "M", "ζ"),
+ (0x1D7B0, "M", "η"),
+ (0x1D7B1, "M", "θ"),
+ (0x1D7B2, "M", "ι"),
+ (0x1D7B3, "M", "κ"),
+ ]
+
+
+def _seg_70() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0x1D7B4, "M", "λ"),
+ (0x1D7B5, "M", "μ"),
+ (0x1D7B6, "M", "ν"),
+ (0x1D7B7, "M", "ξ"),
+ (0x1D7B8, "M", "ο"),
+ (0x1D7B9, "M", "π"),
+ (0x1D7BA, "M", "ρ"),
+ (0x1D7BB, "M", "σ"),
+ (0x1D7BD, "M", "τ"),
+ (0x1D7BE, "M", "υ"),
+ (0x1D7BF, "M", "φ"),
+ (0x1D7C0, "M", "χ"),
+ (0x1D7C1, "M", "ψ"),
+ (0x1D7C2, "M", "ω"),
+ (0x1D7C3, "M", "∂"),
+ (0x1D7C4, "M", "ε"),
+ (0x1D7C5, "M", "θ"),
+ (0x1D7C6, "M", "κ"),
+ (0x1D7C7, "M", "φ"),
+ (0x1D7C8, "M", "ρ"),
+ (0x1D7C9, "M", "π"),
+ (0x1D7CA, "M", "ϝ"),
+ (0x1D7CC, "X"),
+ (0x1D7CE, "M", "0"),
+ (0x1D7CF, "M", "1"),
+ (0x1D7D0, "M", "2"),
+ (0x1D7D1, "M", "3"),
+ (0x1D7D2, "M", "4"),
+ (0x1D7D3, "M", "5"),
+ (0x1D7D4, "M", "6"),
+ (0x1D7D5, "M", "7"),
+ (0x1D7D6, "M", "8"),
+ (0x1D7D7, "M", "9"),
+ (0x1D7D8, "M", "0"),
+ (0x1D7D9, "M", "1"),
+ (0x1D7DA, "M", "2"),
+ (0x1D7DB, "M", "3"),
+ (0x1D7DC, "M", "4"),
+ (0x1D7DD, "M", "5"),
+ (0x1D7DE, "M", "6"),
+ (0x1D7DF, "M", "7"),
+ (0x1D7E0, "M", "8"),
+ (0x1D7E1, "M", "9"),
+ (0x1D7E2, "M", "0"),
+ (0x1D7E3, "M", "1"),
+ (0x1D7E4, "M", "2"),
+ (0x1D7E5, "M", "3"),
+ (0x1D7E6, "M", "4"),
+ (0x1D7E7, "M", "5"),
+ (0x1D7E8, "M", "6"),
+ (0x1D7E9, "M", "7"),
+ (0x1D7EA, "M", "8"),
+ (0x1D7EB, "M", "9"),
+ (0x1D7EC, "M", "0"),
+ (0x1D7ED, "M", "1"),
+ (0x1D7EE, "M", "2"),
+ (0x1D7EF, "M", "3"),
+ (0x1D7F0, "M", "4"),
+ (0x1D7F1, "M", "5"),
+ (0x1D7F2, "M", "6"),
+ (0x1D7F3, "M", "7"),
+ (0x1D7F4, "M", "8"),
+ (0x1D7F5, "M", "9"),
+ (0x1D7F6, "M", "0"),
+ (0x1D7F7, "M", "1"),
+ (0x1D7F8, "M", "2"),
+ (0x1D7F9, "M", "3"),
+ (0x1D7FA, "M", "4"),
+ (0x1D7FB, "M", "5"),
+ (0x1D7FC, "M", "6"),
+ (0x1D7FD, "M", "7"),
+ (0x1D7FE, "M", "8"),
+ (0x1D7FF, "M", "9"),
+ (0x1D800, "V"),
+ (0x1DA8C, "X"),
+ (0x1DA9B, "V"),
+ (0x1DAA0, "X"),
+ (0x1DAA1, "V"),
+ (0x1DAB0, "X"),
+ (0x1DF00, "V"),
+ (0x1DF1F, "X"),
+ (0x1DF25, "V"),
+ (0x1DF2B, "X"),
+ (0x1E000, "V"),
+ (0x1E007, "X"),
+ (0x1E008, "V"),
+ (0x1E019, "X"),
+ (0x1E01B, "V"),
+ (0x1E022, "X"),
+ (0x1E023, "V"),
+ (0x1E025, "X"),
+ (0x1E026, "V"),
+ (0x1E02B, "X"),
+ (0x1E030, "M", "а"),
+ (0x1E031, "M", "б"),
+ (0x1E032, "M", "в"),
+ (0x1E033, "M", "г"),
+ (0x1E034, "M", "д"),
+ (0x1E035, "M", "е"),
+ (0x1E036, "M", "ж"),
+ ]
+
+
+def _seg_71() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0x1E037, "M", "з"),
+ (0x1E038, "M", "и"),
+ (0x1E039, "M", "к"),
+ (0x1E03A, "M", "л"),
+ (0x1E03B, "M", "м"),
+ (0x1E03C, "M", "о"),
+ (0x1E03D, "M", "п"),
+ (0x1E03E, "M", "р"),
+ (0x1E03F, "M", "с"),
+ (0x1E040, "M", "т"),
+ (0x1E041, "M", "у"),
+ (0x1E042, "M", "ф"),
+ (0x1E043, "M", "х"),
+ (0x1E044, "M", "ц"),
+ (0x1E045, "M", "ч"),
+ (0x1E046, "M", "ш"),
+ (0x1E047, "M", "ы"),
+ (0x1E048, "M", "э"),
+ (0x1E049, "M", "ю"),
+ (0x1E04A, "M", "ꚉ"),
+ (0x1E04B, "M", "ә"),
+ (0x1E04C, "M", "і"),
+ (0x1E04D, "M", "ј"),
+ (0x1E04E, "M", "ө"),
+ (0x1E04F, "M", "ү"),
+ (0x1E050, "M", "ӏ"),
+ (0x1E051, "M", "а"),
+ (0x1E052, "M", "б"),
+ (0x1E053, "M", "в"),
+ (0x1E054, "M", "г"),
+ (0x1E055, "M", "д"),
+ (0x1E056, "M", "е"),
+ (0x1E057, "M", "ж"),
+ (0x1E058, "M", "з"),
+ (0x1E059, "M", "и"),
+ (0x1E05A, "M", "к"),
+ (0x1E05B, "M", "л"),
+ (0x1E05C, "M", "о"),
+ (0x1E05D, "M", "п"),
+ (0x1E05E, "M", "с"),
+ (0x1E05F, "M", "у"),
+ (0x1E060, "M", "ф"),
+ (0x1E061, "M", "х"),
+ (0x1E062, "M", "ц"),
+ (0x1E063, "M", "ч"),
+ (0x1E064, "M", "ш"),
+ (0x1E065, "M", "ъ"),
+ (0x1E066, "M", "ы"),
+ (0x1E067, "M", "ґ"),
+ (0x1E068, "M", "і"),
+ (0x1E069, "M", "ѕ"),
+ (0x1E06A, "M", "џ"),
+ (0x1E06B, "M", "ҫ"),
+ (0x1E06C, "M", "ꙑ"),
+ (0x1E06D, "M", "ұ"),
+ (0x1E06E, "X"),
+ (0x1E08F, "V"),
+ (0x1E090, "X"),
+ (0x1E100, "V"),
+ (0x1E12D, "X"),
+ (0x1E130, "V"),
+ (0x1E13E, "X"),
+ (0x1E140, "V"),
+ (0x1E14A, "X"),
+ (0x1E14E, "V"),
+ (0x1E150, "X"),
+ (0x1E290, "V"),
+ (0x1E2AF, "X"),
+ (0x1E2C0, "V"),
+ (0x1E2FA, "X"),
+ (0x1E2FF, "V"),
+ (0x1E300, "X"),
+ (0x1E4D0, "V"),
+ (0x1E4FA, "X"),
+ (0x1E7E0, "V"),
+ (0x1E7E7, "X"),
+ (0x1E7E8, "V"),
+ (0x1E7EC, "X"),
+ (0x1E7ED, "V"),
+ (0x1E7EF, "X"),
+ (0x1E7F0, "V"),
+ (0x1E7FF, "X"),
+ (0x1E800, "V"),
+ (0x1E8C5, "X"),
+ (0x1E8C7, "V"),
+ (0x1E8D7, "X"),
+ (0x1E900, "M", "𞤢"),
+ (0x1E901, "M", "𞤣"),
+ (0x1E902, "M", "𞤤"),
+ (0x1E903, "M", "𞤥"),
+ (0x1E904, "M", "𞤦"),
+ (0x1E905, "M", "𞤧"),
+ (0x1E906, "M", "𞤨"),
+ (0x1E907, "M", "𞤩"),
+ (0x1E908, "M", "𞤪"),
+ (0x1E909, "M", "𞤫"),
+ (0x1E90A, "M", "𞤬"),
+ (0x1E90B, "M", "𞤭"),
+ (0x1E90C, "M", "𞤮"),
+ (0x1E90D, "M", "𞤯"),
+ ]
+
+
+def _seg_72() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0x1E90E, "M", "𞤰"),
+ (0x1E90F, "M", "𞤱"),
+ (0x1E910, "M", "𞤲"),
+ (0x1E911, "M", "𞤳"),
+ (0x1E912, "M", "𞤴"),
+ (0x1E913, "M", "𞤵"),
+ (0x1E914, "M", "𞤶"),
+ (0x1E915, "M", "𞤷"),
+ (0x1E916, "M", "𞤸"),
+ (0x1E917, "M", "𞤹"),
+ (0x1E918, "M", "𞤺"),
+ (0x1E919, "M", "𞤻"),
+ (0x1E91A, "M", "𞤼"),
+ (0x1E91B, "M", "𞤽"),
+ (0x1E91C, "M", "𞤾"),
+ (0x1E91D, "M", "𞤿"),
+ (0x1E91E, "M", "𞥀"),
+ (0x1E91F, "M", "𞥁"),
+ (0x1E920, "M", "𞥂"),
+ (0x1E921, "M", "𞥃"),
+ (0x1E922, "V"),
+ (0x1E94C, "X"),
+ (0x1E950, "V"),
+ (0x1E95A, "X"),
+ (0x1E95E, "V"),
+ (0x1E960, "X"),
+ (0x1EC71, "V"),
+ (0x1ECB5, "X"),
+ (0x1ED01, "V"),
+ (0x1ED3E, "X"),
+ (0x1EE00, "M", "ا"),
+ (0x1EE01, "M", "ب"),
+ (0x1EE02, "M", "ج"),
+ (0x1EE03, "M", "د"),
+ (0x1EE04, "X"),
+ (0x1EE05, "M", "و"),
+ (0x1EE06, "M", "ز"),
+ (0x1EE07, "M", "ح"),
+ (0x1EE08, "M", "ط"),
+ (0x1EE09, "M", "ي"),
+ (0x1EE0A, "M", "ك"),
+ (0x1EE0B, "M", "ل"),
+ (0x1EE0C, "M", "م"),
+ (0x1EE0D, "M", "ن"),
+ (0x1EE0E, "M", "س"),
+ (0x1EE0F, "M", "ع"),
+ (0x1EE10, "M", "ف"),
+ (0x1EE11, "M", "ص"),
+ (0x1EE12, "M", "ق"),
+ (0x1EE13, "M", "ر"),
+ (0x1EE14, "M", "ش"),
+ (0x1EE15, "M", "ت"),
+ (0x1EE16, "M", "ث"),
+ (0x1EE17, "M", "خ"),
+ (0x1EE18, "M", "ذ"),
+ (0x1EE19, "M", "ض"),
+ (0x1EE1A, "M", "ظ"),
+ (0x1EE1B, "M", "غ"),
+ (0x1EE1C, "M", "ٮ"),
+ (0x1EE1D, "M", "ں"),
+ (0x1EE1E, "M", "ڡ"),
+ (0x1EE1F, "M", "ٯ"),
+ (0x1EE20, "X"),
+ (0x1EE21, "M", "ب"),
+ (0x1EE22, "M", "ج"),
+ (0x1EE23, "X"),
+ (0x1EE24, "M", "ه"),
+ (0x1EE25, "X"),
+ (0x1EE27, "M", "ح"),
+ (0x1EE28, "X"),
+ (0x1EE29, "M", "ي"),
+ (0x1EE2A, "M", "ك"),
+ (0x1EE2B, "M", "ل"),
+ (0x1EE2C, "M", "م"),
+ (0x1EE2D, "M", "ن"),
+ (0x1EE2E, "M", "س"),
+ (0x1EE2F, "M", "ع"),
+ (0x1EE30, "M", "ف"),
+ (0x1EE31, "M", "ص"),
+ (0x1EE32, "M", "ق"),
+ (0x1EE33, "X"),
+ (0x1EE34, "M", "ش"),
+ (0x1EE35, "M", "ت"),
+ (0x1EE36, "M", "ث"),
+ (0x1EE37, "M", "خ"),
+ (0x1EE38, "X"),
+ (0x1EE39, "M", "ض"),
+ (0x1EE3A, "X"),
+ (0x1EE3B, "M", "غ"),
+ (0x1EE3C, "X"),
+ (0x1EE42, "M", "ج"),
+ (0x1EE43, "X"),
+ (0x1EE47, "M", "ح"),
+ (0x1EE48, "X"),
+ (0x1EE49, "M", "ي"),
+ (0x1EE4A, "X"),
+ (0x1EE4B, "M", "ل"),
+ (0x1EE4C, "X"),
+ (0x1EE4D, "M", "ن"),
+ (0x1EE4E, "M", "س"),
+ ]
+
+
+def _seg_73() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0x1EE4F, "M", "ع"),
+ (0x1EE50, "X"),
+ (0x1EE51, "M", "ص"),
+ (0x1EE52, "M", "ق"),
+ (0x1EE53, "X"),
+ (0x1EE54, "M", "ش"),
+ (0x1EE55, "X"),
+ (0x1EE57, "M", "خ"),
+ (0x1EE58, "X"),
+ (0x1EE59, "M", "ض"),
+ (0x1EE5A, "X"),
+ (0x1EE5B, "M", "غ"),
+ (0x1EE5C, "X"),
+ (0x1EE5D, "M", "ں"),
+ (0x1EE5E, "X"),
+ (0x1EE5F, "M", "ٯ"),
+ (0x1EE60, "X"),
+ (0x1EE61, "M", "ب"),
+ (0x1EE62, "M", "ج"),
+ (0x1EE63, "X"),
+ (0x1EE64, "M", "ه"),
+ (0x1EE65, "X"),
+ (0x1EE67, "M", "ح"),
+ (0x1EE68, "M", "ط"),
+ (0x1EE69, "M", "ي"),
+ (0x1EE6A, "M", "ك"),
+ (0x1EE6B, "X"),
+ (0x1EE6C, "M", "م"),
+ (0x1EE6D, "M", "ن"),
+ (0x1EE6E, "M", "س"),
+ (0x1EE6F, "M", "ع"),
+ (0x1EE70, "M", "ف"),
+ (0x1EE71, "M", "ص"),
+ (0x1EE72, "M", "ق"),
+ (0x1EE73, "X"),
+ (0x1EE74, "M", "ش"),
+ (0x1EE75, "M", "ت"),
+ (0x1EE76, "M", "ث"),
+ (0x1EE77, "M", "خ"),
+ (0x1EE78, "X"),
+ (0x1EE79, "M", "ض"),
+ (0x1EE7A, "M", "ظ"),
+ (0x1EE7B, "M", "غ"),
+ (0x1EE7C, "M", "ٮ"),
+ (0x1EE7D, "X"),
+ (0x1EE7E, "M", "ڡ"),
+ (0x1EE7F, "X"),
+ (0x1EE80, "M", "ا"),
+ (0x1EE81, "M", "ب"),
+ (0x1EE82, "M", "ج"),
+ (0x1EE83, "M", "د"),
+ (0x1EE84, "M", "ه"),
+ (0x1EE85, "M", "و"),
+ (0x1EE86, "M", "ز"),
+ (0x1EE87, "M", "ح"),
+ (0x1EE88, "M", "ط"),
+ (0x1EE89, "M", "ي"),
+ (0x1EE8A, "X"),
+ (0x1EE8B, "M", "ل"),
+ (0x1EE8C, "M", "م"),
+ (0x1EE8D, "M", "ن"),
+ (0x1EE8E, "M", "س"),
+ (0x1EE8F, "M", "ع"),
+ (0x1EE90, "M", "ف"),
+ (0x1EE91, "M", "ص"),
+ (0x1EE92, "M", "ق"),
+ (0x1EE93, "M", "ر"),
+ (0x1EE94, "M", "ش"),
+ (0x1EE95, "M", "ت"),
+ (0x1EE96, "M", "ث"),
+ (0x1EE97, "M", "خ"),
+ (0x1EE98, "M", "ذ"),
+ (0x1EE99, "M", "ض"),
+ (0x1EE9A, "M", "ظ"),
+ (0x1EE9B, "M", "غ"),
+ (0x1EE9C, "X"),
+ (0x1EEA1, "M", "ب"),
+ (0x1EEA2, "M", "ج"),
+ (0x1EEA3, "M", "د"),
+ (0x1EEA4, "X"),
+ (0x1EEA5, "M", "و"),
+ (0x1EEA6, "M", "ز"),
+ (0x1EEA7, "M", "ح"),
+ (0x1EEA8, "M", "ط"),
+ (0x1EEA9, "M", "ي"),
+ (0x1EEAA, "X"),
+ (0x1EEAB, "M", "ل"),
+ (0x1EEAC, "M", "م"),
+ (0x1EEAD, "M", "ن"),
+ (0x1EEAE, "M", "س"),
+ (0x1EEAF, "M", "ع"),
+ (0x1EEB0, "M", "ف"),
+ (0x1EEB1, "M", "ص"),
+ (0x1EEB2, "M", "ق"),
+ (0x1EEB3, "M", "ر"),
+ (0x1EEB4, "M", "ش"),
+ (0x1EEB5, "M", "ت"),
+ (0x1EEB6, "M", "ث"),
+ (0x1EEB7, "M", "خ"),
+ (0x1EEB8, "M", "ذ"),
+ ]
+
+
+def _seg_74() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0x1EEB9, "M", "ض"),
+ (0x1EEBA, "M", "ظ"),
+ (0x1EEBB, "M", "غ"),
+ (0x1EEBC, "X"),
+ (0x1EEF0, "V"),
+ (0x1EEF2, "X"),
+ (0x1F000, "V"),
+ (0x1F02C, "X"),
+ (0x1F030, "V"),
+ (0x1F094, "X"),
+ (0x1F0A0, "V"),
+ (0x1F0AF, "X"),
+ (0x1F0B1, "V"),
+ (0x1F0C0, "X"),
+ (0x1F0C1, "V"),
+ (0x1F0D0, "X"),
+ (0x1F0D1, "V"),
+ (0x1F0F6, "X"),
+ (0x1F101, "3", "0,"),
+ (0x1F102, "3", "1,"),
+ (0x1F103, "3", "2,"),
+ (0x1F104, "3", "3,"),
+ (0x1F105, "3", "4,"),
+ (0x1F106, "3", "5,"),
+ (0x1F107, "3", "6,"),
+ (0x1F108, "3", "7,"),
+ (0x1F109, "3", "8,"),
+ (0x1F10A, "3", "9,"),
+ (0x1F10B, "V"),
+ (0x1F110, "3", "(a)"),
+ (0x1F111, "3", "(b)"),
+ (0x1F112, "3", "(c)"),
+ (0x1F113, "3", "(d)"),
+ (0x1F114, "3", "(e)"),
+ (0x1F115, "3", "(f)"),
+ (0x1F116, "3", "(g)"),
+ (0x1F117, "3", "(h)"),
+ (0x1F118, "3", "(i)"),
+ (0x1F119, "3", "(j)"),
+ (0x1F11A, "3", "(k)"),
+ (0x1F11B, "3", "(l)"),
+ (0x1F11C, "3", "(m)"),
+ (0x1F11D, "3", "(n)"),
+ (0x1F11E, "3", "(o)"),
+ (0x1F11F, "3", "(p)"),
+ (0x1F120, "3", "(q)"),
+ (0x1F121, "3", "(r)"),
+ (0x1F122, "3", "(s)"),
+ (0x1F123, "3", "(t)"),
+ (0x1F124, "3", "(u)"),
+ (0x1F125, "3", "(v)"),
+ (0x1F126, "3", "(w)"),
+ (0x1F127, "3", "(x)"),
+ (0x1F128, "3", "(y)"),
+ (0x1F129, "3", "(z)"),
+ (0x1F12A, "M", "〔s〕"),
+ (0x1F12B, "M", "c"),
+ (0x1F12C, "M", "r"),
+ (0x1F12D, "M", "cd"),
+ (0x1F12E, "M", "wz"),
+ (0x1F12F, "V"),
+ (0x1F130, "M", "a"),
+ (0x1F131, "M", "b"),
+ (0x1F132, "M", "c"),
+ (0x1F133, "M", "d"),
+ (0x1F134, "M", "e"),
+ (0x1F135, "M", "f"),
+ (0x1F136, "M", "g"),
+ (0x1F137, "M", "h"),
+ (0x1F138, "M", "i"),
+ (0x1F139, "M", "j"),
+ (0x1F13A, "M", "k"),
+ (0x1F13B, "M", "l"),
+ (0x1F13C, "M", "m"),
+ (0x1F13D, "M", "n"),
+ (0x1F13E, "M", "o"),
+ (0x1F13F, "M", "p"),
+ (0x1F140, "M", "q"),
+ (0x1F141, "M", "r"),
+ (0x1F142, "M", "s"),
+ (0x1F143, "M", "t"),
+ (0x1F144, "M", "u"),
+ (0x1F145, "M", "v"),
+ (0x1F146, "M", "w"),
+ (0x1F147, "M", "x"),
+ (0x1F148, "M", "y"),
+ (0x1F149, "M", "z"),
+ (0x1F14A, "M", "hv"),
+ (0x1F14B, "M", "mv"),
+ (0x1F14C, "M", "sd"),
+ (0x1F14D, "M", "ss"),
+ (0x1F14E, "M", "ppv"),
+ (0x1F14F, "M", "wc"),
+ (0x1F150, "V"),
+ (0x1F16A, "M", "mc"),
+ (0x1F16B, "M", "md"),
+ (0x1F16C, "M", "mr"),
+ (0x1F16D, "V"),
+ (0x1F190, "M", "dj"),
+ (0x1F191, "V"),
+ ]
+
+
+def _seg_75() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0x1F1AE, "X"),
+ (0x1F1E6, "V"),
+ (0x1F200, "M", "ほか"),
+ (0x1F201, "M", "ココ"),
+ (0x1F202, "M", "サ"),
+ (0x1F203, "X"),
+ (0x1F210, "M", "手"),
+ (0x1F211, "M", "字"),
+ (0x1F212, "M", "双"),
+ (0x1F213, "M", "デ"),
+ (0x1F214, "M", "二"),
+ (0x1F215, "M", "多"),
+ (0x1F216, "M", "解"),
+ (0x1F217, "M", "天"),
+ (0x1F218, "M", "交"),
+ (0x1F219, "M", "映"),
+ (0x1F21A, "M", "無"),
+ (0x1F21B, "M", "料"),
+ (0x1F21C, "M", "前"),
+ (0x1F21D, "M", "後"),
+ (0x1F21E, "M", "再"),
+ (0x1F21F, "M", "新"),
+ (0x1F220, "M", "初"),
+ (0x1F221, "M", "終"),
+ (0x1F222, "M", "生"),
+ (0x1F223, "M", "販"),
+ (0x1F224, "M", "声"),
+ (0x1F225, "M", "吹"),
+ (0x1F226, "M", "演"),
+ (0x1F227, "M", "投"),
+ (0x1F228, "M", "捕"),
+ (0x1F229, "M", "一"),
+ (0x1F22A, "M", "三"),
+ (0x1F22B, "M", "遊"),
+ (0x1F22C, "M", "左"),
+ (0x1F22D, "M", "中"),
+ (0x1F22E, "M", "右"),
+ (0x1F22F, "M", "指"),
+ (0x1F230, "M", "走"),
+ (0x1F231, "M", "打"),
+ (0x1F232, "M", "禁"),
+ (0x1F233, "M", "空"),
+ (0x1F234, "M", "合"),
+ (0x1F235, "M", "満"),
+ (0x1F236, "M", "有"),
+ (0x1F237, "M", "月"),
+ (0x1F238, "M", "申"),
+ (0x1F239, "M", "割"),
+ (0x1F23A, "M", "営"),
+ (0x1F23B, "M", "配"),
+ (0x1F23C, "X"),
+ (0x1F240, "M", "〔本〕"),
+ (0x1F241, "M", "〔三〕"),
+ (0x1F242, "M", "〔二〕"),
+ (0x1F243, "M", "〔安〕"),
+ (0x1F244, "M", "〔点〕"),
+ (0x1F245, "M", "〔打〕"),
+ (0x1F246, "M", "〔盗〕"),
+ (0x1F247, "M", "〔勝〕"),
+ (0x1F248, "M", "〔敗〕"),
+ (0x1F249, "X"),
+ (0x1F250, "M", "得"),
+ (0x1F251, "M", "可"),
+ (0x1F252, "X"),
+ (0x1F260, "V"),
+ (0x1F266, "X"),
+ (0x1F300, "V"),
+ (0x1F6D8, "X"),
+ (0x1F6DC, "V"),
+ (0x1F6ED, "X"),
+ (0x1F6F0, "V"),
+ (0x1F6FD, "X"),
+ (0x1F700, "V"),
+ (0x1F777, "X"),
+ (0x1F77B, "V"),
+ (0x1F7DA, "X"),
+ (0x1F7E0, "V"),
+ (0x1F7EC, "X"),
+ (0x1F7F0, "V"),
+ (0x1F7F1, "X"),
+ (0x1F800, "V"),
+ (0x1F80C, "X"),
+ (0x1F810, "V"),
+ (0x1F848, "X"),
+ (0x1F850, "V"),
+ (0x1F85A, "X"),
+ (0x1F860, "V"),
+ (0x1F888, "X"),
+ (0x1F890, "V"),
+ (0x1F8AE, "X"),
+ (0x1F8B0, "V"),
+ (0x1F8B2, "X"),
+ (0x1F900, "V"),
+ (0x1FA54, "X"),
+ (0x1FA60, "V"),
+ (0x1FA6E, "X"),
+ (0x1FA70, "V"),
+ (0x1FA7D, "X"),
+ (0x1FA80, "V"),
+ (0x1FA89, "X"),
+ ]
+
+
+def _seg_76() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0x1FA90, "V"),
+ (0x1FABE, "X"),
+ (0x1FABF, "V"),
+ (0x1FAC6, "X"),
+ (0x1FACE, "V"),
+ (0x1FADC, "X"),
+ (0x1FAE0, "V"),
+ (0x1FAE9, "X"),
+ (0x1FAF0, "V"),
+ (0x1FAF9, "X"),
+ (0x1FB00, "V"),
+ (0x1FB93, "X"),
+ (0x1FB94, "V"),
+ (0x1FBCB, "X"),
+ (0x1FBF0, "M", "0"),
+ (0x1FBF1, "M", "1"),
+ (0x1FBF2, "M", "2"),
+ (0x1FBF3, "M", "3"),
+ (0x1FBF4, "M", "4"),
+ (0x1FBF5, "M", "5"),
+ (0x1FBF6, "M", "6"),
+ (0x1FBF7, "M", "7"),
+ (0x1FBF8, "M", "8"),
+ (0x1FBF9, "M", "9"),
+ (0x1FBFA, "X"),
+ (0x20000, "V"),
+ (0x2A6E0, "X"),
+ (0x2A700, "V"),
+ (0x2B73A, "X"),
+ (0x2B740, "V"),
+ (0x2B81E, "X"),
+ (0x2B820, "V"),
+ (0x2CEA2, "X"),
+ (0x2CEB0, "V"),
+ (0x2EBE1, "X"),
+ (0x2EBF0, "V"),
+ (0x2EE5E, "X"),
+ (0x2F800, "M", "丽"),
+ (0x2F801, "M", "丸"),
+ (0x2F802, "M", "乁"),
+ (0x2F803, "M", "𠄢"),
+ (0x2F804, "M", "你"),
+ (0x2F805, "M", "侮"),
+ (0x2F806, "M", "侻"),
+ (0x2F807, "M", "倂"),
+ (0x2F808, "M", "偺"),
+ (0x2F809, "M", "備"),
+ (0x2F80A, "M", "僧"),
+ (0x2F80B, "M", "像"),
+ (0x2F80C, "M", "㒞"),
+ (0x2F80D, "M", "𠘺"),
+ (0x2F80E, "M", "免"),
+ (0x2F80F, "M", "兔"),
+ (0x2F810, "M", "兤"),
+ (0x2F811, "M", "具"),
+ (0x2F812, "M", "𠔜"),
+ (0x2F813, "M", "㒹"),
+ (0x2F814, "M", "內"),
+ (0x2F815, "M", "再"),
+ (0x2F816, "M", "𠕋"),
+ (0x2F817, "M", "冗"),
+ (0x2F818, "M", "冤"),
+ (0x2F819, "M", "仌"),
+ (0x2F81A, "M", "冬"),
+ (0x2F81B, "M", "况"),
+ (0x2F81C, "M", "𩇟"),
+ (0x2F81D, "M", "凵"),
+ (0x2F81E, "M", "刃"),
+ (0x2F81F, "M", "㓟"),
+ (0x2F820, "M", "刻"),
+ (0x2F821, "M", "剆"),
+ (0x2F822, "M", "割"),
+ (0x2F823, "M", "剷"),
+ (0x2F824, "M", "㔕"),
+ (0x2F825, "M", "勇"),
+ (0x2F826, "M", "勉"),
+ (0x2F827, "M", "勤"),
+ (0x2F828, "M", "勺"),
+ (0x2F829, "M", "包"),
+ (0x2F82A, "M", "匆"),
+ (0x2F82B, "M", "北"),
+ (0x2F82C, "M", "卉"),
+ (0x2F82D, "M", "卑"),
+ (0x2F82E, "M", "博"),
+ (0x2F82F, "M", "即"),
+ (0x2F830, "M", "卽"),
+ (0x2F831, "M", "卿"),
+ (0x2F834, "M", "𠨬"),
+ (0x2F835, "M", "灰"),
+ (0x2F836, "M", "及"),
+ (0x2F837, "M", "叟"),
+ (0x2F838, "M", "𠭣"),
+ (0x2F839, "M", "叫"),
+ (0x2F83A, "M", "叱"),
+ (0x2F83B, "M", "吆"),
+ (0x2F83C, "M", "咞"),
+ (0x2F83D, "M", "吸"),
+ (0x2F83E, "M", "呈"),
+ (0x2F83F, "M", "周"),
+ (0x2F840, "M", "咢"),
+ ]
+
+
+def _seg_77() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0x2F841, "M", "哶"),
+ (0x2F842, "M", "唐"),
+ (0x2F843, "M", "啓"),
+ (0x2F844, "M", "啣"),
+ (0x2F845, "M", "善"),
+ (0x2F847, "M", "喙"),
+ (0x2F848, "M", "喫"),
+ (0x2F849, "M", "喳"),
+ (0x2F84A, "M", "嗂"),
+ (0x2F84B, "M", "圖"),
+ (0x2F84C, "M", "嘆"),
+ (0x2F84D, "M", "圗"),
+ (0x2F84E, "M", "噑"),
+ (0x2F84F, "M", "噴"),
+ (0x2F850, "M", "切"),
+ (0x2F851, "M", "壮"),
+ (0x2F852, "M", "城"),
+ (0x2F853, "M", "埴"),
+ (0x2F854, "M", "堍"),
+ (0x2F855, "M", "型"),
+ (0x2F856, "M", "堲"),
+ (0x2F857, "M", "報"),
+ (0x2F858, "M", "墬"),
+ (0x2F859, "M", "𡓤"),
+ (0x2F85A, "M", "売"),
+ (0x2F85B, "M", "壷"),
+ (0x2F85C, "M", "夆"),
+ (0x2F85D, "M", "多"),
+ (0x2F85E, "M", "夢"),
+ (0x2F85F, "M", "奢"),
+ (0x2F860, "M", "𡚨"),
+ (0x2F861, "M", "𡛪"),
+ (0x2F862, "M", "姬"),
+ (0x2F863, "M", "娛"),
+ (0x2F864, "M", "娧"),
+ (0x2F865, "M", "姘"),
+ (0x2F866, "M", "婦"),
+ (0x2F867, "M", "㛮"),
+ (0x2F868, "X"),
+ (0x2F869, "M", "嬈"),
+ (0x2F86A, "M", "嬾"),
+ (0x2F86C, "M", "𡧈"),
+ (0x2F86D, "M", "寃"),
+ (0x2F86E, "M", "寘"),
+ (0x2F86F, "M", "寧"),
+ (0x2F870, "M", "寳"),
+ (0x2F871, "M", "𡬘"),
+ (0x2F872, "M", "寿"),
+ (0x2F873, "M", "将"),
+ (0x2F874, "X"),
+ (0x2F875, "M", "尢"),
+ (0x2F876, "M", "㞁"),
+ (0x2F877, "M", "屠"),
+ (0x2F878, "M", "屮"),
+ (0x2F879, "M", "峀"),
+ (0x2F87A, "M", "岍"),
+ (0x2F87B, "M", "𡷤"),
+ (0x2F87C, "M", "嵃"),
+ (0x2F87D, "M", "𡷦"),
+ (0x2F87E, "M", "嵮"),
+ (0x2F87F, "M", "嵫"),
+ (0x2F880, "M", "嵼"),
+ (0x2F881, "M", "巡"),
+ (0x2F882, "M", "巢"),
+ (0x2F883, "M", "㠯"),
+ (0x2F884, "M", "巽"),
+ (0x2F885, "M", "帨"),
+ (0x2F886, "M", "帽"),
+ (0x2F887, "M", "幩"),
+ (0x2F888, "M", "㡢"),
+ (0x2F889, "M", "𢆃"),
+ (0x2F88A, "M", "㡼"),
+ (0x2F88B, "M", "庰"),
+ (0x2F88C, "M", "庳"),
+ (0x2F88D, "M", "庶"),
+ (0x2F88E, "M", "廊"),
+ (0x2F88F, "M", "𪎒"),
+ (0x2F890, "M", "廾"),
+ (0x2F891, "M", "𢌱"),
+ (0x2F893, "M", "舁"),
+ (0x2F894, "M", "弢"),
+ (0x2F896, "M", "㣇"),
+ (0x2F897, "M", "𣊸"),
+ (0x2F898, "M", "𦇚"),
+ (0x2F899, "M", "形"),
+ (0x2F89A, "M", "彫"),
+ (0x2F89B, "M", "㣣"),
+ (0x2F89C, "M", "徚"),
+ (0x2F89D, "M", "忍"),
+ (0x2F89E, "M", "志"),
+ (0x2F89F, "M", "忹"),
+ (0x2F8A0, "M", "悁"),
+ (0x2F8A1, "M", "㤺"),
+ (0x2F8A2, "M", "㤜"),
+ (0x2F8A3, "M", "悔"),
+ (0x2F8A4, "M", "𢛔"),
+ (0x2F8A5, "M", "惇"),
+ (0x2F8A6, "M", "慈"),
+ (0x2F8A7, "M", "慌"),
+ (0x2F8A8, "M", "慎"),
+ ]
+
+
+def _seg_78() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0x2F8A9, "M", "慌"),
+ (0x2F8AA, "M", "慺"),
+ (0x2F8AB, "M", "憎"),
+ (0x2F8AC, "M", "憲"),
+ (0x2F8AD, "M", "憤"),
+ (0x2F8AE, "M", "憯"),
+ (0x2F8AF, "M", "懞"),
+ (0x2F8B0, "M", "懲"),
+ (0x2F8B1, "M", "懶"),
+ (0x2F8B2, "M", "成"),
+ (0x2F8B3, "M", "戛"),
+ (0x2F8B4, "M", "扝"),
+ (0x2F8B5, "M", "抱"),
+ (0x2F8B6, "M", "拔"),
+ (0x2F8B7, "M", "捐"),
+ (0x2F8B8, "M", "𢬌"),
+ (0x2F8B9, "M", "挽"),
+ (0x2F8BA, "M", "拼"),
+ (0x2F8BB, "M", "捨"),
+ (0x2F8BC, "M", "掃"),
+ (0x2F8BD, "M", "揤"),
+ (0x2F8BE, "M", "𢯱"),
+ (0x2F8BF, "M", "搢"),
+ (0x2F8C0, "M", "揅"),
+ (0x2F8C1, "M", "掩"),
+ (0x2F8C2, "M", "㨮"),
+ (0x2F8C3, "M", "摩"),
+ (0x2F8C4, "M", "摾"),
+ (0x2F8C5, "M", "撝"),
+ (0x2F8C6, "M", "摷"),
+ (0x2F8C7, "M", "㩬"),
+ (0x2F8C8, "M", "敏"),
+ (0x2F8C9, "M", "敬"),
+ (0x2F8CA, "M", "𣀊"),
+ (0x2F8CB, "M", "旣"),
+ (0x2F8CC, "M", "書"),
+ (0x2F8CD, "M", "晉"),
+ (0x2F8CE, "M", "㬙"),
+ (0x2F8CF, "M", "暑"),
+ (0x2F8D0, "M", "㬈"),
+ (0x2F8D1, "M", "㫤"),
+ (0x2F8D2, "M", "冒"),
+ (0x2F8D3, "M", "冕"),
+ (0x2F8D4, "M", "最"),
+ (0x2F8D5, "M", "暜"),
+ (0x2F8D6, "M", "肭"),
+ (0x2F8D7, "M", "䏙"),
+ (0x2F8D8, "M", "朗"),
+ (0x2F8D9, "M", "望"),
+ (0x2F8DA, "M", "朡"),
+ (0x2F8DB, "M", "杞"),
+ (0x2F8DC, "M", "杓"),
+ (0x2F8DD, "M", "𣏃"),
+ (0x2F8DE, "M", "㭉"),
+ (0x2F8DF, "M", "柺"),
+ (0x2F8E0, "M", "枅"),
+ (0x2F8E1, "M", "桒"),
+ (0x2F8E2, "M", "梅"),
+ (0x2F8E3, "M", "𣑭"),
+ (0x2F8E4, "M", "梎"),
+ (0x2F8E5, "M", "栟"),
+ (0x2F8E6, "M", "椔"),
+ (0x2F8E7, "M", "㮝"),
+ (0x2F8E8, "M", "楂"),
+ (0x2F8E9, "M", "榣"),
+ (0x2F8EA, "M", "槪"),
+ (0x2F8EB, "M", "檨"),
+ (0x2F8EC, "M", "𣚣"),
+ (0x2F8ED, "M", "櫛"),
+ (0x2F8EE, "M", "㰘"),
+ (0x2F8EF, "M", "次"),
+ (0x2F8F0, "M", "𣢧"),
+ (0x2F8F1, "M", "歔"),
+ (0x2F8F2, "M", "㱎"),
+ (0x2F8F3, "M", "歲"),
+ (0x2F8F4, "M", "殟"),
+ (0x2F8F5, "M", "殺"),
+ (0x2F8F6, "M", "殻"),
+ (0x2F8F7, "M", "𣪍"),
+ (0x2F8F8, "M", "𡴋"),
+ (0x2F8F9, "M", "𣫺"),
+ (0x2F8FA, "M", "汎"),
+ (0x2F8FB, "M", "𣲼"),
+ (0x2F8FC, "M", "沿"),
+ (0x2F8FD, "M", "泍"),
+ (0x2F8FE, "M", "汧"),
+ (0x2F8FF, "M", "洖"),
+ (0x2F900, "M", "派"),
+ (0x2F901, "M", "海"),
+ (0x2F902, "M", "流"),
+ (0x2F903, "M", "浩"),
+ (0x2F904, "M", "浸"),
+ (0x2F905, "M", "涅"),
+ (0x2F906, "M", "𣴞"),
+ (0x2F907, "M", "洴"),
+ (0x2F908, "M", "港"),
+ (0x2F909, "M", "湮"),
+ (0x2F90A, "M", "㴳"),
+ (0x2F90B, "M", "滋"),
+ (0x2F90C, "M", "滇"),
+ ]
+
+
+def _seg_79() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0x2F90D, "M", "𣻑"),
+ (0x2F90E, "M", "淹"),
+ (0x2F90F, "M", "潮"),
+ (0x2F910, "M", "𣽞"),
+ (0x2F911, "M", "𣾎"),
+ (0x2F912, "M", "濆"),
+ (0x2F913, "M", "瀹"),
+ (0x2F914, "M", "瀞"),
+ (0x2F915, "M", "瀛"),
+ (0x2F916, "M", "㶖"),
+ (0x2F917, "M", "灊"),
+ (0x2F918, "M", "災"),
+ (0x2F919, "M", "灷"),
+ (0x2F91A, "M", "炭"),
+ (0x2F91B, "M", "𠔥"),
+ (0x2F91C, "M", "煅"),
+ (0x2F91D, "M", "𤉣"),
+ (0x2F91E, "M", "熜"),
+ (0x2F91F, "X"),
+ (0x2F920, "M", "爨"),
+ (0x2F921, "M", "爵"),
+ (0x2F922, "M", "牐"),
+ (0x2F923, "M", "𤘈"),
+ (0x2F924, "M", "犀"),
+ (0x2F925, "M", "犕"),
+ (0x2F926, "M", "𤜵"),
+ (0x2F927, "M", "𤠔"),
+ (0x2F928, "M", "獺"),
+ (0x2F929, "M", "王"),
+ (0x2F92A, "M", "㺬"),
+ (0x2F92B, "M", "玥"),
+ (0x2F92C, "M", "㺸"),
+ (0x2F92E, "M", "瑇"),
+ (0x2F92F, "M", "瑜"),
+ (0x2F930, "M", "瑱"),
+ (0x2F931, "M", "璅"),
+ (0x2F932, "M", "瓊"),
+ (0x2F933, "M", "㼛"),
+ (0x2F934, "M", "甤"),
+ (0x2F935, "M", "𤰶"),
+ (0x2F936, "M", "甾"),
+ (0x2F937, "M", "𤲒"),
+ (0x2F938, "M", "異"),
+ (0x2F939, "M", "𢆟"),
+ (0x2F93A, "M", "瘐"),
+ (0x2F93B, "M", "𤾡"),
+ (0x2F93C, "M", "𤾸"),
+ (0x2F93D, "M", "𥁄"),
+ (0x2F93E, "M", "㿼"),
+ (0x2F93F, "M", "䀈"),
+ (0x2F940, "M", "直"),
+ (0x2F941, "M", "𥃳"),
+ (0x2F942, "M", "𥃲"),
+ (0x2F943, "M", "𥄙"),
+ (0x2F944, "M", "𥄳"),
+ (0x2F945, "M", "眞"),
+ (0x2F946, "M", "真"),
+ (0x2F948, "M", "睊"),
+ (0x2F949, "M", "䀹"),
+ (0x2F94A, "M", "瞋"),
+ (0x2F94B, "M", "䁆"),
+ (0x2F94C, "M", "䂖"),
+ (0x2F94D, "M", "𥐝"),
+ (0x2F94E, "M", "硎"),
+ (0x2F94F, "M", "碌"),
+ (0x2F950, "M", "磌"),
+ (0x2F951, "M", "䃣"),
+ (0x2F952, "M", "𥘦"),
+ (0x2F953, "M", "祖"),
+ (0x2F954, "M", "𥚚"),
+ (0x2F955, "M", "𥛅"),
+ (0x2F956, "M", "福"),
+ (0x2F957, "M", "秫"),
+ (0x2F958, "M", "䄯"),
+ (0x2F959, "M", "穀"),
+ (0x2F95A, "M", "穊"),
+ (0x2F95B, "M", "穏"),
+ (0x2F95C, "M", "𥥼"),
+ (0x2F95D, "M", "𥪧"),
+ (0x2F95F, "X"),
+ (0x2F960, "M", "䈂"),
+ (0x2F961, "M", "𥮫"),
+ (0x2F962, "M", "篆"),
+ (0x2F963, "M", "築"),
+ (0x2F964, "M", "䈧"),
+ (0x2F965, "M", "𥲀"),
+ (0x2F966, "M", "糒"),
+ (0x2F967, "M", "䊠"),
+ (0x2F968, "M", "糨"),
+ (0x2F969, "M", "糣"),
+ (0x2F96A, "M", "紀"),
+ (0x2F96B, "M", "𥾆"),
+ (0x2F96C, "M", "絣"),
+ (0x2F96D, "M", "䌁"),
+ (0x2F96E, "M", "緇"),
+ (0x2F96F, "M", "縂"),
+ (0x2F970, "M", "繅"),
+ (0x2F971, "M", "䌴"),
+ (0x2F972, "M", "𦈨"),
+ (0x2F973, "M", "𦉇"),
+ ]
+
+
+def _seg_80() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0x2F974, "M", "䍙"),
+ (0x2F975, "M", "𦋙"),
+ (0x2F976, "M", "罺"),
+ (0x2F977, "M", "𦌾"),
+ (0x2F978, "M", "羕"),
+ (0x2F979, "M", "翺"),
+ (0x2F97A, "M", "者"),
+ (0x2F97B, "M", "𦓚"),
+ (0x2F97C, "M", "𦔣"),
+ (0x2F97D, "M", "聠"),
+ (0x2F97E, "M", "𦖨"),
+ (0x2F97F, "M", "聰"),
+ (0x2F980, "M", "𣍟"),
+ (0x2F981, "M", "䏕"),
+ (0x2F982, "M", "育"),
+ (0x2F983, "M", "脃"),
+ (0x2F984, "M", "䐋"),
+ (0x2F985, "M", "脾"),
+ (0x2F986, "M", "媵"),
+ (0x2F987, "M", "𦞧"),
+ (0x2F988, "M", "𦞵"),
+ (0x2F989, "M", "𣎓"),
+ (0x2F98A, "M", "𣎜"),
+ (0x2F98B, "M", "舁"),
+ (0x2F98C, "M", "舄"),
+ (0x2F98D, "M", "辞"),
+ (0x2F98E, "M", "䑫"),
+ (0x2F98F, "M", "芑"),
+ (0x2F990, "M", "芋"),
+ (0x2F991, "M", "芝"),
+ (0x2F992, "M", "劳"),
+ (0x2F993, "M", "花"),
+ (0x2F994, "M", "芳"),
+ (0x2F995, "M", "芽"),
+ (0x2F996, "M", "苦"),
+ (0x2F997, "M", "𦬼"),
+ (0x2F998, "M", "若"),
+ (0x2F999, "M", "茝"),
+ (0x2F99A, "M", "荣"),
+ (0x2F99B, "M", "莭"),
+ (0x2F99C, "M", "茣"),
+ (0x2F99D, "M", "莽"),
+ (0x2F99E, "M", "菧"),
+ (0x2F99F, "M", "著"),
+ (0x2F9A0, "M", "荓"),
+ (0x2F9A1, "M", "菊"),
+ (0x2F9A2, "M", "菌"),
+ (0x2F9A3, "M", "菜"),
+ (0x2F9A4, "M", "𦰶"),
+ (0x2F9A5, "M", "𦵫"),
+ (0x2F9A6, "M", "𦳕"),
+ (0x2F9A7, "M", "䔫"),
+ (0x2F9A8, "M", "蓱"),
+ (0x2F9A9, "M", "蓳"),
+ (0x2F9AA, "M", "蔖"),
+ (0x2F9AB, "M", "𧏊"),
+ (0x2F9AC, "M", "蕤"),
+ (0x2F9AD, "M", "𦼬"),
+ (0x2F9AE, "M", "䕝"),
+ (0x2F9AF, "M", "䕡"),
+ (0x2F9B0, "M", "𦾱"),
+ (0x2F9B1, "M", "𧃒"),
+ (0x2F9B2, "M", "䕫"),
+ (0x2F9B3, "M", "虐"),
+ (0x2F9B4, "M", "虜"),
+ (0x2F9B5, "M", "虧"),
+ (0x2F9B6, "M", "虩"),
+ (0x2F9B7, "M", "蚩"),
+ (0x2F9B8, "M", "蚈"),
+ (0x2F9B9, "M", "蜎"),
+ (0x2F9BA, "M", "蛢"),
+ (0x2F9BB, "M", "蝹"),
+ (0x2F9BC, "M", "蜨"),
+ (0x2F9BD, "M", "蝫"),
+ (0x2F9BE, "M", "螆"),
+ (0x2F9BF, "X"),
+ (0x2F9C0, "M", "蟡"),
+ (0x2F9C1, "M", "蠁"),
+ (0x2F9C2, "M", "䗹"),
+ (0x2F9C3, "M", "衠"),
+ (0x2F9C4, "M", "衣"),
+ (0x2F9C5, "M", "𧙧"),
+ (0x2F9C6, "M", "裗"),
+ (0x2F9C7, "M", "裞"),
+ (0x2F9C8, "M", "䘵"),
+ (0x2F9C9, "M", "裺"),
+ (0x2F9CA, "M", "㒻"),
+ (0x2F9CB, "M", "𧢮"),
+ (0x2F9CC, "M", "𧥦"),
+ (0x2F9CD, "M", "䚾"),
+ (0x2F9CE, "M", "䛇"),
+ (0x2F9CF, "M", "誠"),
+ (0x2F9D0, "M", "諭"),
+ (0x2F9D1, "M", "變"),
+ (0x2F9D2, "M", "豕"),
+ (0x2F9D3, "M", "𧲨"),
+ (0x2F9D4, "M", "貫"),
+ (0x2F9D5, "M", "賁"),
+ (0x2F9D6, "M", "贛"),
+ (0x2F9D7, "M", "起"),
+ ]
+
+
+def _seg_81() -> List[Union[Tuple[int, str], Tuple[int, str, str]]]:
+ return [
+ (0x2F9D8, "M", "𧼯"),
+ (0x2F9D9, "M", "𠠄"),
+ (0x2F9DA, "M", "跋"),
+ (0x2F9DB, "M", "趼"),
+ (0x2F9DC, "M", "跰"),
+ (0x2F9DD, "M", "𠣞"),
+ (0x2F9DE, "M", "軔"),
+ (0x2F9DF, "M", "輸"),
+ (0x2F9E0, "M", "𨗒"),
+ (0x2F9E1, "M", "𨗭"),
+ (0x2F9E2, "M", "邔"),
+ (0x2F9E3, "M", "郱"),
+ (0x2F9E4, "M", "鄑"),
+ (0x2F9E5, "M", "𨜮"),
+ (0x2F9E6, "M", "鄛"),
+ (0x2F9E7, "M", "鈸"),
+ (0x2F9E8, "M", "鋗"),
+ (0x2F9E9, "M", "鋘"),
+ (0x2F9EA, "M", "鉼"),
+ (0x2F9EB, "M", "鏹"),
+ (0x2F9EC, "M", "鐕"),
+ (0x2F9ED, "M", "𨯺"),
+ (0x2F9EE, "M", "開"),
+ (0x2F9EF, "M", "䦕"),
+ (0x2F9F0, "M", "閷"),
+ (0x2F9F1, "M", "𨵷"),
+ (0x2F9F2, "M", "䧦"),
+ (0x2F9F3, "M", "雃"),
+ (0x2F9F4, "M", "嶲"),
+ (0x2F9F5, "M", "霣"),
+ (0x2F9F6, "M", "𩅅"),
+ (0x2F9F7, "M", "𩈚"),
+ (0x2F9F8, "M", "䩮"),
+ (0x2F9F9, "M", "䩶"),
+ (0x2F9FA, "M", "韠"),
+ (0x2F9FB, "M", "𩐊"),
+ (0x2F9FC, "M", "䪲"),
+ (0x2F9FD, "M", "𩒖"),
+ (0x2F9FE, "M", "頋"),
+ (0x2FA00, "M", "頩"),
+ (0x2FA01, "M", "𩖶"),
+ (0x2FA02, "M", "飢"),
+ (0x2FA03, "M", "䬳"),
+ (0x2FA04, "M", "餩"),
+ (0x2FA05, "M", "馧"),
+ (0x2FA06, "M", "駂"),
+ (0x2FA07, "M", "駾"),
+ (0x2FA08, "M", "䯎"),
+ (0x2FA09, "M", "𩬰"),
+ (0x2FA0A, "M", "鬒"),
+ (0x2FA0B, "M", "鱀"),
+ (0x2FA0C, "M", "鳽"),
+ (0x2FA0D, "M", "䳎"),
+ (0x2FA0E, "M", "䳭"),
+ (0x2FA0F, "M", "鵧"),
+ (0x2FA10, "M", "𪃎"),
+ (0x2FA11, "M", "䳸"),
+ (0x2FA12, "M", "𪄅"),
+ (0x2FA13, "M", "𪈎"),
+ (0x2FA14, "M", "𪊑"),
+ (0x2FA15, "M", "麻"),
+ (0x2FA16, "M", "䵖"),
+ (0x2FA17, "M", "黹"),
+ (0x2FA18, "M", "黾"),
+ (0x2FA19, "M", "鼅"),
+ (0x2FA1A, "M", "鼏"),
+ (0x2FA1B, "M", "鼖"),
+ (0x2FA1C, "M", "鼻"),
+ (0x2FA1D, "M", "𪘀"),
+ (0x2FA1E, "X"),
+ (0x30000, "V"),
+ (0x3134B, "X"),
+ (0x31350, "V"),
+ (0x323B0, "X"),
+ (0xE0100, "I"),
+ (0xE01F0, "X"),
+ ]
+
+
+uts46data = tuple(
+ _seg_0()
+ + _seg_1()
+ + _seg_2()
+ + _seg_3()
+ + _seg_4()
+ + _seg_5()
+ + _seg_6()
+ + _seg_7()
+ + _seg_8()
+ + _seg_9()
+ + _seg_10()
+ + _seg_11()
+ + _seg_12()
+ + _seg_13()
+ + _seg_14()
+ + _seg_15()
+ + _seg_16()
+ + _seg_17()
+ + _seg_18()
+ + _seg_19()
+ + _seg_20()
+ + _seg_21()
+ + _seg_22()
+ + _seg_23()
+ + _seg_24()
+ + _seg_25()
+ + _seg_26()
+ + _seg_27()
+ + _seg_28()
+ + _seg_29()
+ + _seg_30()
+ + _seg_31()
+ + _seg_32()
+ + _seg_33()
+ + _seg_34()
+ + _seg_35()
+ + _seg_36()
+ + _seg_37()
+ + _seg_38()
+ + _seg_39()
+ + _seg_40()
+ + _seg_41()
+ + _seg_42()
+ + _seg_43()
+ + _seg_44()
+ + _seg_45()
+ + _seg_46()
+ + _seg_47()
+ + _seg_48()
+ + _seg_49()
+ + _seg_50()
+ + _seg_51()
+ + _seg_52()
+ + _seg_53()
+ + _seg_54()
+ + _seg_55()
+ + _seg_56()
+ + _seg_57()
+ + _seg_58()
+ + _seg_59()
+ + _seg_60()
+ + _seg_61()
+ + _seg_62()
+ + _seg_63()
+ + _seg_64()
+ + _seg_65()
+ + _seg_66()
+ + _seg_67()
+ + _seg_68()
+ + _seg_69()
+ + _seg_70()
+ + _seg_71()
+ + _seg_72()
+ + _seg_73()
+ + _seg_74()
+ + _seg_75()
+ + _seg_76()
+ + _seg_77()
+ + _seg_78()
+ + _seg_79()
+ + _seg_80()
+ + _seg_81()
+) # type: Tuple[Union[Tuple[int, str], Tuple[int, str, str]], ...]
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/itsdangerous-2.2.0.dist-info/INSTALLER b/stripe_config/.venv/lib/python3.12/site-packages/itsdangerous-2.2.0.dist-info/INSTALLER
new file mode 100644
index 0000000..a1b589e
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/itsdangerous-2.2.0.dist-info/INSTALLER
@@ -0,0 +1 @@
+pip
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/itsdangerous-2.2.0.dist-info/LICENSE.txt b/stripe_config/.venv/lib/python3.12/site-packages/itsdangerous-2.2.0.dist-info/LICENSE.txt
new file mode 100644
index 0000000..7b190ca
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/itsdangerous-2.2.0.dist-info/LICENSE.txt
@@ -0,0 +1,28 @@
+Copyright 2011 Pallets
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are
+met:
+
+1. Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+
+2. Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in the
+ documentation and/or other materials provided with the distribution.
+
+3. Neither the name of the copyright holder nor the names of its
+ contributors may be used to endorse or promote products derived from
+ this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
+PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED
+TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
+LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
+NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/itsdangerous-2.2.0.dist-info/METADATA b/stripe_config/.venv/lib/python3.12/site-packages/itsdangerous-2.2.0.dist-info/METADATA
new file mode 100644
index 0000000..ddf5464
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/itsdangerous-2.2.0.dist-info/METADATA
@@ -0,0 +1,60 @@
+Metadata-Version: 2.1
+Name: itsdangerous
+Version: 2.2.0
+Summary: Safely pass data to untrusted environments and back.
+Maintainer-email: Pallets
+Requires-Python: >=3.8
+Description-Content-Type: text/markdown
+Classifier: Development Status :: 5 - Production/Stable
+Classifier: Intended Audience :: Developers
+Classifier: License :: OSI Approved :: BSD License
+Classifier: Operating System :: OS Independent
+Classifier: Programming Language :: Python
+Classifier: Typing :: Typed
+Project-URL: Changes, https://itsdangerous.palletsprojects.com/changes/
+Project-URL: Chat, https://discord.gg/pallets
+Project-URL: Documentation, https://itsdangerous.palletsprojects.com/
+Project-URL: Donate, https://palletsprojects.com/donate
+Project-URL: Source, https://github.com/pallets/itsdangerous/
+
+# ItsDangerous
+
+... so better sign this
+
+Various helpers to pass data to untrusted environments and to get it
+back safe and sound. Data is cryptographically signed to ensure that a
+token has not been tampered with.
+
+It's possible to customize how data is serialized. Data is compressed as
+needed. A timestamp can be added and verified automatically while
+loading a token.
+
+
+## A Simple Example
+
+Here's how you could generate a token for transmitting a user's id and
+name between web requests.
+
+```python
+from itsdangerous import URLSafeSerializer
+auth_s = URLSafeSerializer("secret key", "auth")
+token = auth_s.dumps({"id": 5, "name": "itsdangerous"})
+
+print(token)
+# eyJpZCI6NSwibmFtZSI6Iml0c2Rhbmdlcm91cyJ9.6YP6T0BaO67XP--9UzTrmurXSmg
+
+data = auth_s.loads(token)
+print(data["name"])
+# itsdangerous
+```
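+
+For a token that should also expire, the timed URL-safe serializer can
+reject anything older than a chosen `max_age`. A minimal sketch (the
+secret key, salt, and one-hour limit below are illustrative placeholders):
+
+```python
+from itsdangerous import SignatureExpired, URLSafeTimedSerializer
+
+s = URLSafeTimedSerializer("secret key", "activate")
+token = s.dumps({"id": 5})
+
+try:
+    # Refuse tokens signed more than an hour ago.
+    data = s.loads(token, max_age=3600)
+except SignatureExpired:
+    data = None  # signature is valid but too old
+```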
+
+
+## Donate
+
+The Pallets organization develops and supports ItsDangerous and other
+popular packages. In order to grow the community of contributors and
+users, and allow the maintainers to devote more time to the projects,
+[please donate today][].
+
+[please donate today]: https://palletsprojects.com/donate
+
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/itsdangerous-2.2.0.dist-info/RECORD b/stripe_config/.venv/lib/python3.12/site-packages/itsdangerous-2.2.0.dist-info/RECORD
new file mode 100644
index 0000000..245f43e
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/itsdangerous-2.2.0.dist-info/RECORD
@@ -0,0 +1,22 @@
+itsdangerous-2.2.0.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4
+itsdangerous-2.2.0.dist-info/LICENSE.txt,sha256=Y68JiRtr6K0aQlLtQ68PTvun_JSOIoNnvtfzxa4LCdc,1475
+itsdangerous-2.2.0.dist-info/METADATA,sha256=0rk0-1ZwihuU5DnwJVwPWoEI4yWOyCexih3JyZHblhE,1924
+itsdangerous-2.2.0.dist-info/RECORD,,
+itsdangerous-2.2.0.dist-info/WHEEL,sha256=EZbGkh7Ie4PoZfRQ8I0ZuP9VklN_TvcZ6DSE5Uar4z4,81
+itsdangerous/__init__.py,sha256=4SK75sCe29xbRgQE1ZQtMHnKUuZYAf3bSpZOrff1IAY,1427
+itsdangerous/__pycache__/__init__.cpython-312.pyc,,
+itsdangerous/__pycache__/_json.cpython-312.pyc,,
+itsdangerous/__pycache__/encoding.cpython-312.pyc,,
+itsdangerous/__pycache__/exc.cpython-312.pyc,,
+itsdangerous/__pycache__/serializer.cpython-312.pyc,,
+itsdangerous/__pycache__/signer.cpython-312.pyc,,
+itsdangerous/__pycache__/timed.cpython-312.pyc,,
+itsdangerous/__pycache__/url_safe.cpython-312.pyc,,
+itsdangerous/_json.py,sha256=wPQGmge2yZ9328EHKF6gadGeyGYCJQKxtU-iLKE6UnA,473
+itsdangerous/encoding.py,sha256=wwTz5q_3zLcaAdunk6_vSoStwGqYWe307Zl_U87aRFM,1409
+itsdangerous/exc.py,sha256=Rr3exo0MRFEcPZltwecyK16VV1bE2K9_F1-d-ljcUn4,3201
+itsdangerous/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
+itsdangerous/serializer.py,sha256=PmdwADLqkSyQLZ0jOKAgDsAW4k_H0TlA71Ei3z0C5aI,15601
+itsdangerous/signer.py,sha256=YO0CV7NBvHA6j549REHJFUjUojw2pHqwcUpQnU7yNYQ,9647
+itsdangerous/timed.py,sha256=6RvDMqNumGMxf0-HlpaZdN9PUQQmRvrQGplKhxuivUs,8083
+itsdangerous/url_safe.py,sha256=az4e5fXi_vs-YbWj8YZwn4wiVKfeD--GEKRT5Ueu4P4,2505
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/itsdangerous-2.2.0.dist-info/WHEEL b/stripe_config/.venv/lib/python3.12/site-packages/itsdangerous-2.2.0.dist-info/WHEEL
new file mode 100644
index 0000000..3b5e64b
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/itsdangerous-2.2.0.dist-info/WHEEL
@@ -0,0 +1,4 @@
+Wheel-Version: 1.0
+Generator: flit 3.9.0
+Root-Is-Purelib: true
+Tag: py3-none-any
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/itsdangerous/__init__.py b/stripe_config/.venv/lib/python3.12/site-packages/itsdangerous/__init__.py
new file mode 100644
index 0000000..ea55256
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/itsdangerous/__init__.py
@@ -0,0 +1,38 @@
+from __future__ import annotations
+
+import typing as t
+
+from .encoding import base64_decode as base64_decode
+from .encoding import base64_encode as base64_encode
+from .encoding import want_bytes as want_bytes
+from .exc import BadData as BadData
+from .exc import BadHeader as BadHeader
+from .exc import BadPayload as BadPayload
+from .exc import BadSignature as BadSignature
+from .exc import BadTimeSignature as BadTimeSignature
+from .exc import SignatureExpired as SignatureExpired
+from .serializer import Serializer as Serializer
+from .signer import HMACAlgorithm as HMACAlgorithm
+from .signer import NoneAlgorithm as NoneAlgorithm
+from .signer import Signer as Signer
+from .timed import TimedSerializer as TimedSerializer
+from .timed import TimestampSigner as TimestampSigner
+from .url_safe import URLSafeSerializer as URLSafeSerializer
+from .url_safe import URLSafeTimedSerializer as URLSafeTimedSerializer
+
+
+def __getattr__(name: str) -> t.Any:
+ if name == "__version__":
+ import importlib.metadata
+ import warnings
+
+ warnings.warn(
+ "The '__version__' attribute is deprecated and will be removed in"
+ " ItsDangerous 2.3. Use feature detection or"
+ " 'importlib.metadata.version(\"itsdangerous\")' instead.",
+ DeprecationWarning,
+ stacklevel=2,
+ )
+ return importlib.metadata.version("itsdangerous")
+
+ raise AttributeError(name)
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/itsdangerous/__pycache__/__init__.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/itsdangerous/__pycache__/__init__.cpython-312.pyc
new file mode 100644
index 0000000..2b63a29
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/itsdangerous/__pycache__/__init__.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/itsdangerous/__pycache__/_json.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/itsdangerous/__pycache__/_json.cpython-312.pyc
new file mode 100644
index 0000000..a83a603
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/itsdangerous/__pycache__/_json.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/itsdangerous/__pycache__/encoding.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/itsdangerous/__pycache__/encoding.cpython-312.pyc
new file mode 100644
index 0000000..e0d3cfc
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/itsdangerous/__pycache__/encoding.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/itsdangerous/__pycache__/exc.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/itsdangerous/__pycache__/exc.cpython-312.pyc
new file mode 100644
index 0000000..28e3795
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/itsdangerous/__pycache__/exc.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/itsdangerous/__pycache__/serializer.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/itsdangerous/__pycache__/serializer.cpython-312.pyc
new file mode 100644
index 0000000..4677e8a
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/itsdangerous/__pycache__/serializer.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/itsdangerous/__pycache__/signer.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/itsdangerous/__pycache__/signer.cpython-312.pyc
new file mode 100644
index 0000000..507fbed
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/itsdangerous/__pycache__/signer.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/itsdangerous/__pycache__/timed.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/itsdangerous/__pycache__/timed.cpython-312.pyc
new file mode 100644
index 0000000..e912c9f
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/itsdangerous/__pycache__/timed.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/itsdangerous/__pycache__/url_safe.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/itsdangerous/__pycache__/url_safe.cpython-312.pyc
new file mode 100644
index 0000000..6eb8e4d
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/itsdangerous/__pycache__/url_safe.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/itsdangerous/_json.py b/stripe_config/.venv/lib/python3.12/site-packages/itsdangerous/_json.py
new file mode 100644
index 0000000..fc23fea
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/itsdangerous/_json.py
@@ -0,0 +1,18 @@
+from __future__ import annotations
+
+import json as _json
+import typing as t
+
+
+class _CompactJSON:
+ """Wrapper around json module that strips whitespace."""
+
+ @staticmethod
+ def loads(payload: str | bytes) -> t.Any:
+ return _json.loads(payload)
+
+ @staticmethod
+ def dumps(obj: t.Any, **kwargs: t.Any) -> str:
+ kwargs.setdefault("ensure_ascii", False)
+ kwargs.setdefault("separators", (",", ":"))
+ return _json.dumps(obj, **kwargs)
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/itsdangerous/encoding.py b/stripe_config/.venv/lib/python3.12/site-packages/itsdangerous/encoding.py
new file mode 100644
index 0000000..f5ca80f
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/itsdangerous/encoding.py
@@ -0,0 +1,54 @@
+from __future__ import annotations
+
+import base64
+import string
+import struct
+import typing as t
+
+from .exc import BadData
+
+
+def want_bytes(
+ s: str | bytes, encoding: str = "utf-8", errors: str = "strict"
+) -> bytes:
+ if isinstance(s, str):
+ s = s.encode(encoding, errors)
+
+ return s
+
+
+def base64_encode(string: str | bytes) -> bytes:
+ """Base64 encode a string of bytes or text. The resulting bytes are
+ safe to use in URLs.
+ """
+ string = want_bytes(string)
+ return base64.urlsafe_b64encode(string).rstrip(b"=")
+
+
+def base64_decode(string: str | bytes) -> bytes:
+ """Base64 decode a URL-safe string of bytes or text. The result is
+ bytes.
+ """
+ string = want_bytes(string, encoding="ascii", errors="ignore")
+ string += b"=" * (-len(string) % 4)
+
+ try:
+ return base64.urlsafe_b64decode(string)
+ except (TypeError, ValueError) as e:
+ raise BadData("Invalid base64-encoded data") from e
+
+
+# The alphabet used by base64.urlsafe_*
+_base64_alphabet = f"{string.ascii_letters}{string.digits}-_=".encode("ascii")
+
+_int64_struct = struct.Struct(">Q")
+_int_to_bytes = _int64_struct.pack
+_bytes_to_int = t.cast("t.Callable[[bytes], tuple[int]]", _int64_struct.unpack)
+
+
+def int_to_bytes(num: int) -> bytes:
+ return _int_to_bytes(num).lstrip(b"\x00")
+
+
+def bytes_to_int(bytestr: bytes) -> int:
+ return _bytes_to_int(bytestr.rjust(8, b"\x00"))[0]
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/itsdangerous/exc.py b/stripe_config/.venv/lib/python3.12/site-packages/itsdangerous/exc.py
new file mode 100644
index 0000000..a75adcd
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/itsdangerous/exc.py
@@ -0,0 +1,106 @@
+from __future__ import annotations
+
+import typing as t
+from datetime import datetime
+
+
+class BadData(Exception):
+ """Raised if bad data of any sort was encountered. This is the base
+ for all exceptions that ItsDangerous defines.
+
+ .. versionadded:: 0.15
+ """
+
+ def __init__(self, message: str):
+ super().__init__(message)
+ self.message = message
+
+ def __str__(self) -> str:
+ return self.message
+
+
+class BadSignature(BadData):
+ """Raised if a signature does not match."""
+
+ def __init__(self, message: str, payload: t.Any | None = None):
+ super().__init__(message)
+
+ #: The payload that failed the signature test. In some
+ #: situations you might still want to inspect this, even if
+ #: you know it was tampered with.
+ #:
+ #: .. versionadded:: 0.14
+ self.payload: t.Any | None = payload
+
+
+class BadTimeSignature(BadSignature):
+ """Raised if a time-based signature is invalid. This is a subclass
+ of :class:`BadSignature`.
+ """
+
+ def __init__(
+ self,
+ message: str,
+ payload: t.Any | None = None,
+ date_signed: datetime | None = None,
+ ):
+ super().__init__(message, payload)
+
+ #: If the signature expired this exposes the date of when the
+ #: signature was created. This can be helpful in order to
+        #: tell the user how long ago a link went stale.
+ #:
+ #: .. versionchanged:: 2.0
+ #: The datetime value is timezone-aware rather than naive.
+ #:
+ #: .. versionadded:: 0.14
+ self.date_signed = date_signed
+
+
+class SignatureExpired(BadTimeSignature):
+ """Raised if a signature timestamp is older than ``max_age``. This
+ is a subclass of :exc:`BadTimeSignature`.
+ """
+
+
+class BadHeader(BadSignature):
+ """Raised if a signed header is invalid in some form. This only
+ happens for serializers that have a header that goes with the
+ signature.
+
+ .. versionadded:: 0.24
+ """
+
+ def __init__(
+ self,
+ message: str,
+ payload: t.Any | None = None,
+ header: t.Any | None = None,
+ original_error: Exception | None = None,
+ ):
+ super().__init__(message, payload)
+
+ #: If the header is actually available but just malformed it
+ #: might be stored here.
+ self.header: t.Any | None = header
+
+ #: If available, the error that indicates why the payload was
+ #: not valid. This might be ``None``.
+ self.original_error: Exception | None = original_error
+
+
+class BadPayload(BadData):
+ """Raised if a payload is invalid. This could happen if the payload
+ is loaded despite an invalid signature, or if there is a mismatch
+ between the serializer and deserializer. The original exception
+    that occurred during loading is stored as :attr:`original_error`.
+
+ .. versionadded:: 0.15
+ """
+
+ def __init__(self, message: str, original_error: Exception | None = None):
+ super().__init__(message)
+
+ #: If available, the error that indicates why the payload was
+ #: not valid. This might be ``None``.
+ self.original_error: Exception | None = original_error
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/itsdangerous/py.typed b/stripe_config/.venv/lib/python3.12/site-packages/itsdangerous/py.typed
new file mode 100644
index 0000000..e69de29
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/itsdangerous/serializer.py b/stripe_config/.venv/lib/python3.12/site-packages/itsdangerous/serializer.py
new file mode 100644
index 0000000..5ddf387
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/itsdangerous/serializer.py
@@ -0,0 +1,406 @@
+from __future__ import annotations
+
+import collections.abc as cabc
+import json
+import typing as t
+
+from .encoding import want_bytes
+from .exc import BadPayload
+from .exc import BadSignature
+from .signer import _make_keys_list
+from .signer import Signer
+
+if t.TYPE_CHECKING:
+ import typing_extensions as te
+
+    # This should be either str or bytes. To avoid having to specify the
+ # bound type, it falls back to a union if structural matching fails.
+ _TSerialized = te.TypeVar(
+ "_TSerialized", bound=t.Union[str, bytes], default=t.Union[str, bytes]
+ )
+else:
+ # Still available at runtime on Python < 3.13, but without the default.
+ _TSerialized = t.TypeVar("_TSerialized", bound=t.Union[str, bytes])
+
+
+class _PDataSerializer(t.Protocol[_TSerialized]):
+ def loads(self, payload: _TSerialized, /) -> t.Any: ...
+ # A signature with additional arguments is not handled correctly by type
+ # checkers right now, so an overload is used below for serializers that
+ # don't match this strict protocol.
+ def dumps(self, obj: t.Any, /) -> _TSerialized: ...
+
+
+# Use TypeIs once it's available in typing_extensions or 3.13.
+def is_text_serializer(
+ serializer: _PDataSerializer[t.Any],
+) -> te.TypeGuard[_PDataSerializer[str]]:
+ """Checks whether a serializer generates text or binary."""
+ return isinstance(serializer.dumps({}), str)
+
+
+class Serializer(t.Generic[_TSerialized]):
+ """A serializer wraps a :class:`~itsdangerous.signer.Signer` to
+ enable serializing and securely signing data other than bytes. It
+ can unsign to verify that the data hasn't been changed.
+
+ The serializer provides :meth:`dumps` and :meth:`loads`, similar to
+ :mod:`json`, and by default uses :mod:`json` internally to serialize
+ the data to bytes.
+
+ The secret key should be a random string of ``bytes`` and should not
+ be saved to code or version control. Different salts should be used
+ to distinguish signing in different contexts. See :doc:`/concepts`
+ for information about the security of the secret key and salt.
+
+ :param secret_key: The secret key to sign and verify with. Can be a
+ list of keys, oldest to newest, to support key rotation.
+ :param salt: Extra key to combine with ``secret_key`` to distinguish
+ signatures in different contexts.
+ :param serializer: An object that provides ``dumps`` and ``loads``
+ methods for serializing data to a string. Defaults to
+ :attr:`default_serializer`, which defaults to :mod:`json`.
+ :param serializer_kwargs: Keyword arguments to pass when calling
+ ``serializer.dumps``.
+ :param signer: A ``Signer`` class to instantiate when signing data.
+ Defaults to :attr:`default_signer`, which defaults to
+ :class:`~itsdangerous.signer.Signer`.
+ :param signer_kwargs: Keyword arguments to pass when instantiating
+ the ``Signer`` class.
+ :param fallback_signers: List of signer parameters to try when
+ unsigning with the default signer fails. Each item can be a dict
+ of ``signer_kwargs``, a ``Signer`` class, or a tuple of
+ ``(signer, signer_kwargs)``. Defaults to
+ :attr:`default_fallback_signers`.
+
+ .. versionchanged:: 2.0
+ Added support for key rotation by passing a list to
+ ``secret_key``.
+
+ .. versionchanged:: 2.0
+ Removed the default SHA-512 fallback signer from
+ ``default_fallback_signers``.
+
+ .. versionchanged:: 1.1
+ Added support for ``fallback_signers`` and configured a default
+ SHA-512 fallback. This fallback is for users who used the yanked
+ 1.0.0 release which defaulted to SHA-512.
+
+ .. versionchanged:: 0.14
+ The ``signer`` and ``signer_kwargs`` parameters were added to
+ the constructor.
+ """
+
+ #: The default serialization module to use to serialize data to a
+ #: string internally. The default is :mod:`json`, but can be changed
+ #: to any object that provides ``dumps`` and ``loads`` methods.
+ default_serializer: _PDataSerializer[t.Any] = json
+
+ #: The default ``Signer`` class to instantiate when signing data.
+ #: The default is :class:`itsdangerous.signer.Signer`.
+ default_signer: type[Signer] = Signer
+
+ #: The default fallback signers to try when unsigning fails.
+ default_fallback_signers: list[
+ dict[str, t.Any] | tuple[type[Signer], dict[str, t.Any]] | type[Signer]
+ ] = []
+
+ # Serializer[str] if no data serializer is provided, or if it returns str.
+ @t.overload
+ def __init__(
+ self: Serializer[str],
+ secret_key: str | bytes | cabc.Iterable[str] | cabc.Iterable[bytes],
+ salt: str | bytes | None = b"itsdangerous",
+ serializer: None | _PDataSerializer[str] = None,
+ serializer_kwargs: dict[str, t.Any] | None = None,
+ signer: type[Signer] | None = None,
+ signer_kwargs: dict[str, t.Any] | None = None,
+ fallback_signers: list[
+ dict[str, t.Any] | tuple[type[Signer], dict[str, t.Any]] | type[Signer]
+ ]
+ | None = None,
+ ): ...
+
+ # Serializer[bytes] with a bytes data serializer positional argument.
+ @t.overload
+ def __init__(
+ self: Serializer[bytes],
+ secret_key: str | bytes | cabc.Iterable[str] | cabc.Iterable[bytes],
+ salt: str | bytes | None,
+ serializer: _PDataSerializer[bytes],
+ serializer_kwargs: dict[str, t.Any] | None = None,
+ signer: type[Signer] | None = None,
+ signer_kwargs: dict[str, t.Any] | None = None,
+ fallback_signers: list[
+ dict[str, t.Any] | tuple[type[Signer], dict[str, t.Any]] | type[Signer]
+ ]
+ | None = None,
+ ): ...
+
+ # Serializer[bytes] with a bytes data serializer keyword argument.
+ @t.overload
+ def __init__(
+ self: Serializer[bytes],
+ secret_key: str | bytes | cabc.Iterable[str] | cabc.Iterable[bytes],
+ salt: str | bytes | None = b"itsdangerous",
+ *,
+ serializer: _PDataSerializer[bytes],
+ serializer_kwargs: dict[str, t.Any] | None = None,
+ signer: type[Signer] | None = None,
+ signer_kwargs: dict[str, t.Any] | None = None,
+ fallback_signers: list[
+ dict[str, t.Any] | tuple[type[Signer], dict[str, t.Any]] | type[Signer]
+ ]
+ | None = None,
+ ): ...
+
+ # Fall back with a positional argument. If the strict signature of
+ # _PDataSerializer doesn't match, fall back to a union, requiring the user
+ # to specify the type.
+ @t.overload
+ def __init__(
+ self,
+ secret_key: str | bytes | cabc.Iterable[str] | cabc.Iterable[bytes],
+ salt: str | bytes | None,
+ serializer: t.Any,
+ serializer_kwargs: dict[str, t.Any] | None = None,
+ signer: type[Signer] | None = None,
+ signer_kwargs: dict[str, t.Any] | None = None,
+ fallback_signers: list[
+ dict[str, t.Any] | tuple[type[Signer], dict[str, t.Any]] | type[Signer]
+ ]
+ | None = None,
+ ): ...
+
+ # Fall back with a keyword argument.
+ @t.overload
+ def __init__(
+ self,
+ secret_key: str | bytes | cabc.Iterable[str] | cabc.Iterable[bytes],
+ salt: str | bytes | None = b"itsdangerous",
+ *,
+ serializer: t.Any,
+ serializer_kwargs: dict[str, t.Any] | None = None,
+ signer: type[Signer] | None = None,
+ signer_kwargs: dict[str, t.Any] | None = None,
+ fallback_signers: list[
+ dict[str, t.Any] | tuple[type[Signer], dict[str, t.Any]] | type[Signer]
+ ]
+ | None = None,
+ ): ...
+
+ def __init__(
+ self,
+ secret_key: str | bytes | cabc.Iterable[str] | cabc.Iterable[bytes],
+ salt: str | bytes | None = b"itsdangerous",
+ serializer: t.Any | None = None,
+ serializer_kwargs: dict[str, t.Any] | None = None,
+ signer: type[Signer] | None = None,
+ signer_kwargs: dict[str, t.Any] | None = None,
+ fallback_signers: list[
+ dict[str, t.Any] | tuple[type[Signer], dict[str, t.Any]] | type[Signer]
+ ]
+ | None = None,
+ ):
+ #: The list of secret keys to try for verifying signatures, from
+ #: oldest to newest. The newest (last) key is used for signing.
+ #:
+ #: This allows a key rotation system to keep a list of allowed
+ #: keys and remove expired ones.
+ self.secret_keys: list[bytes] = _make_keys_list(secret_key)
+
+ if salt is not None:
+ salt = want_bytes(salt)
+ # if salt is None then the signer's default is used
+
+ self.salt = salt
+
+ if serializer is None:
+ serializer = self.default_serializer
+
+ self.serializer: _PDataSerializer[_TSerialized] = serializer
+ self.is_text_serializer: bool = is_text_serializer(serializer)
+
+ if signer is None:
+ signer = self.default_signer
+
+ self.signer: type[Signer] = signer
+ self.signer_kwargs: dict[str, t.Any] = signer_kwargs or {}
+
+ if fallback_signers is None:
+ fallback_signers = list(self.default_fallback_signers)
+
+ self.fallback_signers: list[
+ dict[str, t.Any] | tuple[type[Signer], dict[str, t.Any]] | type[Signer]
+ ] = fallback_signers
+ self.serializer_kwargs: dict[str, t.Any] = serializer_kwargs or {}
+
+ @property
+ def secret_key(self) -> bytes:
+ """The newest (last) entry in the :attr:`secret_keys` list. This
+ is for compatibility from before key rotation support was added.
+ """
+ return self.secret_keys[-1]
+
+ def load_payload(
+ self, payload: bytes, serializer: _PDataSerializer[t.Any] | None = None
+ ) -> t.Any:
+ """Loads the encoded object. This function raises
+ :class:`.BadPayload` if the payload is not valid. The
+ ``serializer`` parameter can be used to override the serializer
+ stored on the class. The encoded ``payload`` should always be
+ bytes.
+ """
+ if serializer is None:
+ use_serializer = self.serializer
+ is_text = self.is_text_serializer
+ else:
+ use_serializer = serializer
+ is_text = is_text_serializer(serializer)
+
+ try:
+ if is_text:
+ return use_serializer.loads(payload.decode("utf-8")) # type: ignore[arg-type]
+
+ return use_serializer.loads(payload) # type: ignore[arg-type]
+ except Exception as e:
+ raise BadPayload(
+ "Could not load the payload because an exception"
+ " occurred on unserializing the data.",
+ original_error=e,
+ ) from e
+
+ def dump_payload(self, obj: t.Any) -> bytes:
+ """Dumps the encoded object. The return value is always bytes.
+ If the internal serializer returns text, the value will be
+ encoded as UTF-8.
+ """
+ return want_bytes(self.serializer.dumps(obj, **self.serializer_kwargs))
+
+ def make_signer(self, salt: str | bytes | None = None) -> Signer:
+ """Creates a new instance of the signer to be used. The default
+ implementation uses the :class:`.Signer` base class.
+ """
+ if salt is None:
+ salt = self.salt
+
+ return self.signer(self.secret_keys, salt=salt, **self.signer_kwargs)
+
+ def iter_unsigners(self, salt: str | bytes | None = None) -> cabc.Iterator[Signer]:
+ """Iterates over all signers to be tried for unsigning. Starts
+ with the configured signer, then constructs each signer
+ specified in ``fallback_signers``.
+ """
+ if salt is None:
+ salt = self.salt
+
+ yield self.make_signer(salt)
+
+ for fallback in self.fallback_signers:
+ if isinstance(fallback, dict):
+ kwargs = fallback
+ fallback = self.signer
+ elif isinstance(fallback, tuple):
+ fallback, kwargs = fallback
+ else:
+ kwargs = self.signer_kwargs
+
+ for secret_key in self.secret_keys:
+ yield fallback(secret_key, salt=salt, **kwargs)
+
+ def dumps(self, obj: t.Any, salt: str | bytes | None = None) -> _TSerialized:
+ """Returns a signed string serialized with the internal
+        serializer. The return value can be bytes or text depending on
+        the format of the internal serializer.
+ """
+ payload = want_bytes(self.dump_payload(obj))
+ rv = self.make_signer(salt).sign(payload)
+
+ if self.is_text_serializer:
+ return rv.decode("utf-8") # type: ignore[return-value]
+
+ return rv # type: ignore[return-value]
+
+ def dump(self, obj: t.Any, f: t.IO[t.Any], salt: str | bytes | None = None) -> None:
+ """Like :meth:`dumps` but dumps into a file. The file handle has
+ to be compatible with what the internal serializer expects.
+ """
+ f.write(self.dumps(obj, salt))
+
+ def loads(
+ self, s: str | bytes, salt: str | bytes | None = None, **kwargs: t.Any
+ ) -> t.Any:
+ """Reverse of :meth:`dumps`. Raises :exc:`.BadSignature` if the
+ signature validation fails.
+ """
+ s = want_bytes(s)
+ last_exception = None
+
+ for signer in self.iter_unsigners(salt):
+ try:
+ return self.load_payload(signer.unsign(s))
+ except BadSignature as err:
+ last_exception = err
+
+ raise t.cast(BadSignature, last_exception)
+
+ def load(self, f: t.IO[t.Any], salt: str | bytes | None = None) -> t.Any:
+ """Like :meth:`loads` but loads from a file."""
+ return self.loads(f.read(), salt)
+
+ def loads_unsafe(
+ self, s: str | bytes, salt: str | bytes | None = None
+ ) -> tuple[bool, t.Any]:
+ """Like :meth:`loads` but without verifying the signature. This
+ is potentially very dangerous to use depending on how your
+ serializer works. The return value is ``(signature_valid,
+ payload)`` instead of just the payload. The first item will be a
+ boolean that indicates if the signature is valid. This function
+ never fails.
+
+ Use it for debugging only and if you know that your serializer
+ module is not exploitable (for example, do not use it with a
+ pickle serializer).
+
+ .. versionadded:: 0.15
+ """
+ return self._loads_unsafe_impl(s, salt)
+
+ def _loads_unsafe_impl(
+ self,
+ s: str | bytes,
+ salt: str | bytes | None,
+ load_kwargs: dict[str, t.Any] | None = None,
+ load_payload_kwargs: dict[str, t.Any] | None = None,
+ ) -> tuple[bool, t.Any]:
+ """Low level helper function to implement :meth:`loads_unsafe`
+ in serializer subclasses.
+ """
+ if load_kwargs is None:
+ load_kwargs = {}
+
+ try:
+ return True, self.loads(s, salt=salt, **load_kwargs)
+ except BadSignature as e:
+ if e.payload is None:
+ return False, None
+
+ if load_payload_kwargs is None:
+ load_payload_kwargs = {}
+
+ try:
+ return (
+ False,
+ self.load_payload(e.payload, **load_payload_kwargs),
+ )
+ except BadPayload:
+ return False, None
+
+ def load_unsafe(
+ self, f: t.IO[t.Any], salt: str | bytes | None = None
+ ) -> tuple[bool, t.Any]:
+ """Like :meth:`loads_unsafe` but loads from a file.
+
+ .. versionadded:: 0.15
+ """
+ return self.loads_unsafe(f.read(), salt=salt)
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/itsdangerous/signer.py b/stripe_config/.venv/lib/python3.12/site-packages/itsdangerous/signer.py
new file mode 100644
index 0000000..e324dc0
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/itsdangerous/signer.py
@@ -0,0 +1,266 @@
+from __future__ import annotations
+
+import collections.abc as cabc
+import hashlib
+import hmac
+import typing as t
+
+from .encoding import _base64_alphabet
+from .encoding import base64_decode
+from .encoding import base64_encode
+from .encoding import want_bytes
+from .exc import BadSignature
+
+
+class SigningAlgorithm:
+ """Subclasses must implement :meth:`get_signature` to provide
+ signature generation functionality.
+ """
+
+ def get_signature(self, key: bytes, value: bytes) -> bytes:
+ """Returns the signature for the given key and value."""
+ raise NotImplementedError()
+
+ def verify_signature(self, key: bytes, value: bytes, sig: bytes) -> bool:
+ """Verifies the given signature matches the expected
+ signature.
+ """
+ return hmac.compare_digest(sig, self.get_signature(key, value))
+
+
+class NoneAlgorithm(SigningAlgorithm):
+ """Provides an algorithm that does not perform any signing and
+ returns an empty signature.
+ """
+
+ def get_signature(self, key: bytes, value: bytes) -> bytes:
+ return b""
+
+
+def _lazy_sha1(string: bytes = b"") -> t.Any:
+ """Don't access ``hashlib.sha1`` until runtime. FIPS builds may not include
+ SHA-1, in which case the import and use as a default would fail before the
+ developer can configure something else.
+ """
+ return hashlib.sha1(string)
+
+
+class HMACAlgorithm(SigningAlgorithm):
+ """Provides signature generation using HMACs."""
+
+ #: The digest method to use with the MAC algorithm. This defaults to
+ #: SHA1, but can be changed to any other function in the hashlib
+ #: module.
+ default_digest_method: t.Any = staticmethod(_lazy_sha1)
+
+ def __init__(self, digest_method: t.Any = None):
+ if digest_method is None:
+ digest_method = self.default_digest_method
+
+ self.digest_method: t.Any = digest_method
+
+ def get_signature(self, key: bytes, value: bytes) -> bytes:
+ mac = hmac.new(key, msg=value, digestmod=self.digest_method)
+ return mac.digest()
+
+
+def _make_keys_list(
+ secret_key: str | bytes | cabc.Iterable[str] | cabc.Iterable[bytes],
+) -> list[bytes]:
+ if isinstance(secret_key, (str, bytes)):
+ return [want_bytes(secret_key)]
+
+ return [want_bytes(s) for s in secret_key] # pyright: ignore
+
+
+class Signer:
+ """A signer securely signs bytes, then unsigns them to verify that
+ the value hasn't been changed.
+
+ The secret key should be a random string of ``bytes`` and should not
+ be saved to code or version control. Different salts should be used
+ to distinguish signing in different contexts. See :doc:`/concepts`
+ for information about the security of the secret key and salt.
+
+ :param secret_key: The secret key to sign and verify with. Can be a
+ list of keys, oldest to newest, to support key rotation.
+ :param salt: Extra key to combine with ``secret_key`` to distinguish
+ signatures in different contexts.
+ :param sep: Separator between the signature and value.
+ :param key_derivation: How to derive the signing key from the secret
+ key and salt. Possible values are ``concat``, ``django-concat``,
+ or ``hmac``. Defaults to :attr:`default_key_derivation`, which
+ defaults to ``django-concat``.
+ :param digest_method: Hash function to use when generating the HMAC
+ signature. Defaults to :attr:`default_digest_method`, which
+ defaults to :func:`hashlib.sha1`. Note that the security of the
+ hash alone doesn't apply when used intermediately in HMAC.
+ :param algorithm: A :class:`SigningAlgorithm` instance to use
+ instead of building a default :class:`HMACAlgorithm` with the
+ ``digest_method``.
+
+ .. versionchanged:: 2.0
+ Added support for key rotation by passing a list to
+ ``secret_key``.
+
+ .. versionchanged:: 0.18
+ ``algorithm`` was added as an argument to the class constructor.
+
+ .. versionchanged:: 0.14
+ ``key_derivation`` and ``digest_method`` were added as arguments
+ to the class constructor.
+ """
+
+ #: The default digest method to use for the signer. The default is
+ #: :func:`hashlib.sha1`, but can be changed to any :mod:`hashlib` or
+ #: compatible object. Note that the security of the hash alone
+ #: doesn't apply when used intermediately in HMAC.
+ #:
+ #: .. versionadded:: 0.14
+ default_digest_method: t.Any = staticmethod(_lazy_sha1)
+
+ #: The default scheme to use to derive the signing key from the
+ #: secret key and salt. The default is ``django-concat``. Possible
+ #: values are ``concat``, ``django-concat``, and ``hmac``.
+ #:
+ #: .. versionadded:: 0.14
+ default_key_derivation: str = "django-concat"
+
+ def __init__(
+ self,
+ secret_key: str | bytes | cabc.Iterable[str] | cabc.Iterable[bytes],
+ salt: str | bytes | None = b"itsdangerous.Signer",
+ sep: str | bytes = b".",
+ key_derivation: str | None = None,
+ digest_method: t.Any | None = None,
+ algorithm: SigningAlgorithm | None = None,
+ ):
+ #: The list of secret keys to try for verifying signatures, from
+ #: oldest to newest. The newest (last) key is used for signing.
+ #:
+ #: This allows a key rotation system to keep a list of allowed
+ #: keys and remove expired ones.
+ self.secret_keys: list[bytes] = _make_keys_list(secret_key)
+ self.sep: bytes = want_bytes(sep)
+
+ if self.sep in _base64_alphabet:
+ raise ValueError(
+ "The given separator cannot be used because it may be"
+ " contained in the signature itself. ASCII letters,"
+ " digits, and '-_=' must not be used."
+ )
+
+ if salt is not None:
+ salt = want_bytes(salt)
+ else:
+ salt = b"itsdangerous.Signer"
+
+ self.salt = salt
+
+ if key_derivation is None:
+ key_derivation = self.default_key_derivation
+
+ self.key_derivation: str = key_derivation
+
+ if digest_method is None:
+ digest_method = self.default_digest_method
+
+ self.digest_method: t.Any = digest_method
+
+ if algorithm is None:
+ algorithm = HMACAlgorithm(self.digest_method)
+
+ self.algorithm: SigningAlgorithm = algorithm
+
+ @property
+ def secret_key(self) -> bytes:
+ """The newest (last) entry in the :attr:`secret_keys` list. This
+ is for compatibility from before key rotation support was added.
+ """
+ return self.secret_keys[-1]
+
+ def derive_key(self, secret_key: str | bytes | None = None) -> bytes:
+ """This method is called to derive the key. The default key
+ derivation choices can be overridden here. Key derivation is not
+ intended to be used as a security method to make a complex key
+ out of a short password. Instead you should use large random
+ secret keys.
+
+ :param secret_key: A specific secret key to derive from.
+ Defaults to the last item in :attr:`secret_keys`.
+
+ .. versionchanged:: 2.0
+ Added the ``secret_key`` parameter.
+ """
+ if secret_key is None:
+ secret_key = self.secret_keys[-1]
+ else:
+ secret_key = want_bytes(secret_key)
+
+ if self.key_derivation == "concat":
+ return t.cast(bytes, self.digest_method(self.salt + secret_key).digest())
+ elif self.key_derivation == "django-concat":
+ return t.cast(
+ bytes, self.digest_method(self.salt + b"signer" + secret_key).digest()
+ )
+ elif self.key_derivation == "hmac":
+ mac = hmac.new(secret_key, digestmod=self.digest_method)
+ mac.update(self.salt)
+ return mac.digest()
+ elif self.key_derivation == "none":
+ return secret_key
+ else:
+ raise TypeError("Unknown key derivation method")
+
+ def get_signature(self, value: str | bytes) -> bytes:
+ """Returns the signature for the given value."""
+ value = want_bytes(value)
+ key = self.derive_key()
+ sig = self.algorithm.get_signature(key, value)
+ return base64_encode(sig)
+
+ def sign(self, value: str | bytes) -> bytes:
+ """Signs the given string."""
+ value = want_bytes(value)
+ return value + self.sep + self.get_signature(value)
+
+ def verify_signature(self, value: str | bytes, sig: str | bytes) -> bool:
+ """Verifies the signature for the given value."""
+ try:
+ sig = base64_decode(sig)
+ except Exception:
+ return False
+
+ value = want_bytes(value)
+
+ for secret_key in reversed(self.secret_keys):
+ key = self.derive_key(secret_key)
+
+ if self.algorithm.verify_signature(key, value, sig):
+ return True
+
+ return False
+
+ def unsign(self, signed_value: str | bytes) -> bytes:
+ """Unsigns the given string."""
+ signed_value = want_bytes(signed_value)
+
+ if self.sep not in signed_value:
+ raise BadSignature(f"No {self.sep!r} found in value")
+
+ value, sig = signed_value.rsplit(self.sep, 1)
+
+ if self.verify_signature(value, sig):
+ return value
+
+ raise BadSignature(f"Signature {sig!r} does not match", payload=value)
+
+ def validate(self, signed_value: str | bytes) -> bool:
+ """Only validates the given signed value. Returns ``True`` if
+ the signature exists and is valid.
+ """
+ try:
+ self.unsign(signed_value)
+ return True
+ except BadSignature:
+ return False
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/itsdangerous/timed.py b/stripe_config/.venv/lib/python3.12/site-packages/itsdangerous/timed.py
new file mode 100644
index 0000000..7384375
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/itsdangerous/timed.py
@@ -0,0 +1,228 @@
+from __future__ import annotations
+
+import collections.abc as cabc
+import time
+import typing as t
+from datetime import datetime
+from datetime import timezone
+
+from .encoding import base64_decode
+from .encoding import base64_encode
+from .encoding import bytes_to_int
+from .encoding import int_to_bytes
+from .encoding import want_bytes
+from .exc import BadSignature
+from .exc import BadTimeSignature
+from .exc import SignatureExpired
+from .serializer import _TSerialized
+from .serializer import Serializer
+from .signer import Signer
+
+
+class TimestampSigner(Signer):
+ """Works like the regular :class:`.Signer` but also records the time
+ of the signing and can be used to expire signatures. The
+ :meth:`unsign` method can raise :exc:`.SignatureExpired` if the
+ unsigning failed because the signature is expired.
+ """
+
+ def get_timestamp(self) -> int:
+ """Returns the current timestamp. The function must return an
+ integer.
+ """
+ return int(time.time())
+
+ def timestamp_to_datetime(self, ts: int) -> datetime:
+ """Convert the timestamp from :meth:`get_timestamp` into an
+        aware :class:`datetime.datetime` in UTC.
+
+ .. versionchanged:: 2.0
+ The timestamp is returned as a timezone-aware ``datetime``
+ in UTC rather than a naive ``datetime`` assumed to be UTC.
+ """
+ return datetime.fromtimestamp(ts, tz=timezone.utc)
+
+ def sign(self, value: str | bytes) -> bytes:
+ """Signs the given string and also attaches time information."""
+ value = want_bytes(value)
+ timestamp = base64_encode(int_to_bytes(self.get_timestamp()))
+ sep = want_bytes(self.sep)
+ value = value + sep + timestamp
+ return value + sep + self.get_signature(value)
+
+ # Ignore overlapping signatures check, return_timestamp is the only
+ # parameter that affects the return type.
+
+ @t.overload
+ def unsign( # type: ignore[overload-overlap]
+ self,
+ signed_value: str | bytes,
+ max_age: int | None = None,
+ return_timestamp: t.Literal[False] = False,
+ ) -> bytes: ...
+
+ @t.overload
+ def unsign(
+ self,
+ signed_value: str | bytes,
+ max_age: int | None = None,
+ return_timestamp: t.Literal[True] = True,
+ ) -> tuple[bytes, datetime]: ...
+
+ def unsign(
+ self,
+ signed_value: str | bytes,
+ max_age: int | None = None,
+ return_timestamp: bool = False,
+ ) -> tuple[bytes, datetime] | bytes:
+ """Works like the regular :meth:`.Signer.unsign` but can also
+ validate the time. See the base docstring of the class for
+ the general behavior. If ``return_timestamp`` is ``True`` the
+ timestamp of the signature will be returned as an aware
+ :class:`datetime.datetime` object in UTC.
+
+ .. versionchanged:: 2.0
+ The timestamp is returned as a timezone-aware ``datetime``
+ in UTC rather than a naive ``datetime`` assumed to be UTC.
+ """
+ try:
+ result = super().unsign(signed_value)
+ sig_error = None
+ except BadSignature as e:
+ sig_error = e
+ result = e.payload or b""
+
+ sep = want_bytes(self.sep)
+
+ # If there is no timestamp in the result there is something
+        # seriously wrong. If there was a signature error, raise that one
+        # directly; otherwise the only way to get here is using a
+        # time-based serializer on non-timestamp data, so catch that.
+ if sep not in result:
+ if sig_error:
+ raise sig_error
+
+ raise BadTimeSignature("timestamp missing", payload=result)
+
+ value, ts_bytes = result.rsplit(sep, 1)
+ ts_int: int | None = None
+ ts_dt: datetime | None = None
+
+ try:
+ ts_int = bytes_to_int(base64_decode(ts_bytes))
+ except Exception:
+ pass
+
+ # Signature is *not* okay. Raise a proper error now that we have
+ # split the value and the timestamp.
+ if sig_error is not None:
+ if ts_int is not None:
+ try:
+ ts_dt = self.timestamp_to_datetime(ts_int)
+ except (ValueError, OSError, OverflowError) as exc:
+ # Windows raises OSError
+ # 32-bit raises OverflowError
+ raise BadTimeSignature(
+ "Malformed timestamp", payload=value
+ ) from exc
+
+ raise BadTimeSignature(str(sig_error), payload=value, date_signed=ts_dt)
+
+ # Signature was okay but the timestamp is actually not there or
+ # malformed. Should not happen, but we handle it anyway.
+ if ts_int is None:
+ raise BadTimeSignature("Malformed timestamp", payload=value)
+
+ # Check timestamp is not older than max_age
+ if max_age is not None:
+ age = self.get_timestamp() - ts_int
+
+ if age > max_age:
+ raise SignatureExpired(
+ f"Signature age {age} > {max_age} seconds",
+ payload=value,
+ date_signed=self.timestamp_to_datetime(ts_int),
+ )
+
+ if age < 0:
+ raise SignatureExpired(
+ f"Signature age {age} < 0 seconds",
+ payload=value,
+ date_signed=self.timestamp_to_datetime(ts_int),
+ )
+
+ if return_timestamp:
+ return value, self.timestamp_to_datetime(ts_int)
+
+ return value
+
+ def validate(self, signed_value: str | bytes, max_age: int | None = None) -> bool:
+ """Only validates the given signed value. Returns ``True`` if
+ the signature exists and is valid."""
+ try:
+ self.unsign(signed_value, max_age=max_age)
+ return True
+ except BadSignature:
+ return False
+
+
+class TimedSerializer(Serializer[_TSerialized]):
+ """Uses :class:`TimestampSigner` instead of the default
+ :class:`.Signer`.
+ """
+
+ default_signer: type[TimestampSigner] = TimestampSigner
+
+ def iter_unsigners(
+ self, salt: str | bytes | None = None
+ ) -> cabc.Iterator[TimestampSigner]:
+ return t.cast("cabc.Iterator[TimestampSigner]", super().iter_unsigners(salt))
+
+ # TODO: Signature is incompatible because parameters were added
+ # before salt.
+
+ def loads( # type: ignore[override]
+ self,
+ s: str | bytes,
+ max_age: int | None = None,
+ return_timestamp: bool = False,
+ salt: str | bytes | None = None,
+ ) -> t.Any:
+ """Reverse of :meth:`dumps`, raises :exc:`.BadSignature` if the
+ signature validation fails. If a ``max_age`` is provided it will
+ ensure the signature is not older than that time in seconds. In
+ case the signature is outdated, :exc:`.SignatureExpired` is
+ raised. All arguments are forwarded to the signer's
+ :meth:`~TimestampSigner.unsign` method.
+ """
+ s = want_bytes(s)
+ last_exception = None
+
+ for signer in self.iter_unsigners(salt):
+ try:
+ base64d, timestamp = signer.unsign(
+ s, max_age=max_age, return_timestamp=True
+ )
+ payload = self.load_payload(base64d)
+
+ if return_timestamp:
+ return payload, timestamp
+
+ return payload
+ except SignatureExpired:
+ # The signature was unsigned successfully but was
+ # expired. Do not try the next signer.
+ raise
+ except BadSignature as err:
+ last_exception = err
+
+ raise t.cast(BadSignature, last_exception)
+
+ def loads_unsafe( # type: ignore[override]
+ self,
+ s: str | bytes,
+ max_age: int | None = None,
+ salt: str | bytes | None = None,
+ ) -> tuple[bool, t.Any]:
+ return self._loads_unsafe_impl(s, salt, load_kwargs={"max_age": max_age})
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/itsdangerous/url_safe.py b/stripe_config/.venv/lib/python3.12/site-packages/itsdangerous/url_safe.py
new file mode 100644
index 0000000..56a0793
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/itsdangerous/url_safe.py
@@ -0,0 +1,83 @@
+from __future__ import annotations
+
+import typing as t
+import zlib
+
+from ._json import _CompactJSON
+from .encoding import base64_decode
+from .encoding import base64_encode
+from .exc import BadPayload
+from .serializer import _PDataSerializer
+from .serializer import Serializer
+from .timed import TimedSerializer
+
+
+class URLSafeSerializerMixin(Serializer[str]):
+ """Mixed in with a regular serializer it will attempt to zlib
+ compress the string to make it shorter if necessary. It will also
+ base64 encode the string so that it can safely be placed in a URL.
+ """
+
+ default_serializer: _PDataSerializer[str] = _CompactJSON
+
+ def load_payload(
+ self,
+ payload: bytes,
+ *args: t.Any,
+ serializer: t.Any | None = None,
+ **kwargs: t.Any,
+ ) -> t.Any:
+ decompress = False
+
+ if payload.startswith(b"."):
+ payload = payload[1:]
+ decompress = True
+
+ try:
+ json = base64_decode(payload)
+ except Exception as e:
+ raise BadPayload(
+ "Could not base64 decode the payload because of an exception",
+ original_error=e,
+ ) from e
+
+ if decompress:
+ try:
+ json = zlib.decompress(json)
+ except Exception as e:
+ raise BadPayload(
+ "Could not zlib decompress the payload before decoding the payload",
+ original_error=e,
+ ) from e
+
+ return super().load_payload(json, *args, **kwargs)
+
+ def dump_payload(self, obj: t.Any) -> bytes:
+ json = super().dump_payload(obj)
+ is_compressed = False
+ compressed = zlib.compress(json)
+
+ if len(compressed) < (len(json) - 1):
+ json = compressed
+ is_compressed = True
+
+ base64d = base64_encode(json)
+
+ if is_compressed:
+ base64d = b"." + base64d
+
+ return base64d
+
+
+class URLSafeSerializer(URLSafeSerializerMixin, Serializer[str]):
+ """Works like :class:`.Serializer` but dumps and loads into a URL
+    safe string consisting of the upper and lowercase characters of the
+ alphabet as well as ``'_'``, ``'-'`` and ``'.'``.
+ """
+
+
+class URLSafeTimedSerializer(URLSafeSerializerMixin, TimedSerializer[str]):
+ """Works like :class:`.TimedSerializer` but dumps and loads into a
+    URL safe string consisting of the upper and lowercase characters of
+ the alphabet as well as ``'_'``, ``'-'`` and ``'.'``.
+ """
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/jinja2-3.1.6.dist-info/INSTALLER b/stripe_config/.venv/lib/python3.12/site-packages/jinja2-3.1.6.dist-info/INSTALLER
new file mode 100644
index 0000000..a1b589e
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/jinja2-3.1.6.dist-info/INSTALLER
@@ -0,0 +1 @@
+pip
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/jinja2-3.1.6.dist-info/METADATA b/stripe_config/.venv/lib/python3.12/site-packages/jinja2-3.1.6.dist-info/METADATA
new file mode 100644
index 0000000..ffef2ff
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/jinja2-3.1.6.dist-info/METADATA
@@ -0,0 +1,84 @@
+Metadata-Version: 2.4
+Name: Jinja2
+Version: 3.1.6
+Summary: A very fast and expressive template engine.
+Maintainer-email: Pallets
+Requires-Python: >=3.7
+Description-Content-Type: text/markdown
+Classifier: Development Status :: 5 - Production/Stable
+Classifier: Environment :: Web Environment
+Classifier: Intended Audience :: Developers
+Classifier: License :: OSI Approved :: BSD License
+Classifier: Operating System :: OS Independent
+Classifier: Programming Language :: Python
+Classifier: Topic :: Internet :: WWW/HTTP :: Dynamic Content
+Classifier: Topic :: Text Processing :: Markup :: HTML
+Classifier: Typing :: Typed
+License-File: LICENSE.txt
+Requires-Dist: MarkupSafe>=2.0
+Requires-Dist: Babel>=2.7 ; extra == "i18n"
+Project-URL: Changes, https://jinja.palletsprojects.com/changes/
+Project-URL: Chat, https://discord.gg/pallets
+Project-URL: Documentation, https://jinja.palletsprojects.com/
+Project-URL: Donate, https://palletsprojects.com/donate
+Project-URL: Source, https://github.com/pallets/jinja/
+Provides-Extra: i18n
+
+# Jinja
+
+Jinja is a fast, expressive, extensible templating engine. Special
+placeholders in the template allow writing code similar to Python
+syntax. Then the template is passed data to render the final document.
+
+It includes:
+
+- Template inheritance and inclusion.
+- Define and import macros within templates.
+- HTML templates can use autoescaping to prevent XSS from untrusted
+ user input.
+- A sandboxed environment can safely render untrusted templates.
+- AsyncIO support for generating templates and calling async
+ functions.
+- I18N support with Babel.
+- Templates are compiled to optimized Python code just-in-time and
+ cached, or can be compiled ahead-of-time.
+- Exceptions point to the correct line in templates to make debugging
+ easier.
+- Extensible filters, tests, functions, and even syntax.
+
+Jinja's philosophy is that while application logic belongs in Python if
+possible, it shouldn't make the template designer's job difficult by
+restricting functionality too much.
+
+
+## In A Nutshell
+
+```jinja
+{% extends "base.html" %}
+{% block title %}Members{% endblock %}
+{% block content %}
+  <ul>
+  {% for user in users %}
+    <li><a href="{{ user.url }}">{{ user.username }}</a></li>
+  {% endfor %}
+  </ul>
+{% endblock %}
+```
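+
+As described above, a template is compiled and then passed data to render
+the final document. A minimal sketch of that step (the template string and
+variable name below are illustrative, not taken from the example above):
+
+```python
+from jinja2 import DictLoader, Environment
+
+# An in-memory loader stands in for real template files here.
+env = Environment(
+    loader=DictLoader({"hello.html": "Hello {{ name }}!"}),
+    autoescape=True,
+)
+print(env.get_template("hello.html").render(name="World"))
+# Hello World!
+```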
+
+## Donate
+
+The Pallets organization develops and supports Jinja and other popular
+packages. In order to grow the community of contributors and users, and
+allow the maintainers to devote more time to the projects, [please
+donate today][].
+
+[please donate today]: https://palletsprojects.com/donate
+
+## Contributing
+
+See our [detailed contributing documentation][contrib] for many ways to
+contribute, including reporting issues, requesting features, asking or answering
+questions, and making PRs.
+
+[contrib]: https://palletsprojects.com/contributing/
+
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/jinja2-3.1.6.dist-info/RECORD b/stripe_config/.venv/lib/python3.12/site-packages/jinja2-3.1.6.dist-info/RECORD
new file mode 100644
index 0000000..ffa3866
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/jinja2-3.1.6.dist-info/RECORD
@@ -0,0 +1,57 @@
+jinja2-3.1.6.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4
+jinja2-3.1.6.dist-info/METADATA,sha256=aMVUj7Z8QTKhOJjZsx7FDGvqKr3ZFdkh8hQ1XDpkmcg,2871
+jinja2-3.1.6.dist-info/RECORD,,
+jinja2-3.1.6.dist-info/WHEEL,sha256=_2ozNFCLWc93bK4WKHCO-eDUENDlo-dgc9cU3qokYO4,82
+jinja2-3.1.6.dist-info/entry_points.txt,sha256=OL85gYU1eD8cuPlikifFngXpeBjaxl6rIJ8KkC_3r-I,58
+jinja2-3.1.6.dist-info/licenses/LICENSE.txt,sha256=O0nc7kEF6ze6wQ-vG-JgQI_oXSUrjp3y4JefweCUQ3s,1475
+jinja2/__init__.py,sha256=xxepO9i7DHsqkQrgBEduLtfoz2QCuT6_gbL4XSN1hbU,1928
+jinja2/__pycache__/__init__.cpython-312.pyc,,
+jinja2/__pycache__/_identifier.cpython-312.pyc,,
+jinja2/__pycache__/async_utils.cpython-312.pyc,,
+jinja2/__pycache__/bccache.cpython-312.pyc,,
+jinja2/__pycache__/compiler.cpython-312.pyc,,
+jinja2/__pycache__/constants.cpython-312.pyc,,
+jinja2/__pycache__/debug.cpython-312.pyc,,
+jinja2/__pycache__/defaults.cpython-312.pyc,,
+jinja2/__pycache__/environment.cpython-312.pyc,,
+jinja2/__pycache__/exceptions.cpython-312.pyc,,
+jinja2/__pycache__/ext.cpython-312.pyc,,
+jinja2/__pycache__/filters.cpython-312.pyc,,
+jinja2/__pycache__/idtracking.cpython-312.pyc,,
+jinja2/__pycache__/lexer.cpython-312.pyc,,
+jinja2/__pycache__/loaders.cpython-312.pyc,,
+jinja2/__pycache__/meta.cpython-312.pyc,,
+jinja2/__pycache__/nativetypes.cpython-312.pyc,,
+jinja2/__pycache__/nodes.cpython-312.pyc,,
+jinja2/__pycache__/optimizer.cpython-312.pyc,,
+jinja2/__pycache__/parser.cpython-312.pyc,,
+jinja2/__pycache__/runtime.cpython-312.pyc,,
+jinja2/__pycache__/sandbox.cpython-312.pyc,,
+jinja2/__pycache__/tests.cpython-312.pyc,,
+jinja2/__pycache__/utils.cpython-312.pyc,,
+jinja2/__pycache__/visitor.cpython-312.pyc,,
+jinja2/_identifier.py,sha256=_zYctNKzRqlk_murTNlzrju1FFJL7Va_Ijqqd7ii2lU,1958
+jinja2/async_utils.py,sha256=vK-PdsuorOMnWSnEkT3iUJRIkTnYgO2T6MnGxDgHI5o,2834
+jinja2/bccache.py,sha256=gh0qs9rulnXo0PhX5jTJy2UHzI8wFnQ63o_vw7nhzRg,14061
+jinja2/compiler.py,sha256=9RpCQl5X88BHllJiPsHPh295Hh0uApvwFJNQuutULeM,74131
+jinja2/constants.py,sha256=GMoFydBF_kdpaRKPoM5cl5MviquVRLVyZtfp5-16jg0,1433
+jinja2/debug.py,sha256=CnHqCDHd-BVGvti_8ZsTolnXNhA3ECsY-6n_2pwU8Hw,6297
+jinja2/defaults.py,sha256=boBcSw78h-lp20YbaXSJsqkAI2uN_mD_TtCydpeq5wU,1267
+jinja2/environment.py,sha256=9nhrP7Ch-NbGX00wvyr4yy-uhNHq2OCc60ggGrni_fk,61513
+jinja2/exceptions.py,sha256=ioHeHrWwCWNaXX1inHmHVblvc4haO7AXsjCp3GfWvx0,5071
+jinja2/ext.py,sha256=5PF5eHfh8mXAIxXHHRB2xXbXohi8pE3nHSOxa66uS7E,31875
+jinja2/filters.py,sha256=PQ_Egd9n9jSgtnGQYyF4K5j2nYwhUIulhPnyimkdr-k,55212
+jinja2/idtracking.py,sha256=-ll5lIp73pML3ErUYiIJj7tdmWxcH_IlDv3yA_hiZYo,10555
+jinja2/lexer.py,sha256=LYiYio6br-Tep9nPcupWXsPEtjluw3p1mU-lNBVRUfk,29786
+jinja2/loaders.py,sha256=wIrnxjvcbqh5VwW28NSkfotiDq8qNCxIOSFbGUiSLB4,24055
+jinja2/meta.py,sha256=OTDPkaFvU2Hgvx-6akz7154F8BIWaRmvJcBFvwopHww,4397
+jinja2/nativetypes.py,sha256=7GIGALVJgdyL80oZJdQUaUfwSt5q2lSSZbXt0dNf_M4,4210
+jinja2/nodes.py,sha256=m1Duzcr6qhZI8JQ6VyJgUNinjAf5bQzijSmDnMsvUx8,34579
+jinja2/optimizer.py,sha256=rJnCRlQ7pZsEEmMhsQDgC_pKyDHxP5TPS6zVPGsgcu8,1651
+jinja2/parser.py,sha256=lLOFy3sEmHc5IaEHRiH1sQVnId2moUQzhyeJZTtdY30,40383
+jinja2/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
+jinja2/runtime.py,sha256=gDk-GvdriJXqgsGbHgrcKTP0Yp6zPXzhzrIpCFH3jAU,34249
+jinja2/sandbox.py,sha256=Mw2aitlY2I8la7FYhcX2YG9BtUYcLnD0Gh3d29cDWrY,15009
+jinja2/tests.py,sha256=VLsBhVFnWg-PxSBz1MhRnNWgP1ovXk3neO1FLQMeC9Q,5926
+jinja2/utils.py,sha256=rRp3o9e7ZKS4fyrWRbELyLcpuGVTFcnooaOa1qx_FIk,24129
+jinja2/visitor.py,sha256=EcnL1PIwf_4RVCOMxsRNuR8AXHbS1qfAdMOE2ngKJz4,3557
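For reference, RECORD is the standard wheel install manifest: one CSV row per file with the path relative to site-packages, a `sha256=<urlsafe-base64, unpadded>` digest (empty for RECORD itself and the `__pycache__` entries), and the size in bytes. A minimal verification sketch, assuming the `.venv` layout this patch adds:

```python
import base64
import csv
import hashlib
import os

site = "stripe_config/.venv/lib/python3.12/site-packages"  # prefix taken from the diff paths
record = os.path.join(site, "jinja2-3.1.6.dist-info", "RECORD")

with open(record, newline="") as f:
    for path, digest, _size in csv.reader(f):
        if not digest:  # RECORD itself and .pyc entries carry no hash
            continue
        algo, _, expected = digest.partition("=")
        with open(os.path.join(site, path), "rb") as fh:
            actual = base64.urlsafe_b64encode(hashlib.new(algo, fh.read()).digest())
        assert actual.rstrip(b"=").decode() == expected, f"hash mismatch: {path}"
```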
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/jinja2-3.1.6.dist-info/WHEEL b/stripe_config/.venv/lib/python3.12/site-packages/jinja2-3.1.6.dist-info/WHEEL
new file mode 100644
index 0000000..23d2d7e
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/jinja2-3.1.6.dist-info/WHEEL
@@ -0,0 +1,4 @@
+Wheel-Version: 1.0
+Generator: flit 3.11.0
+Root-Is-Purelib: true
+Tag: py3-none-any
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/jinja2-3.1.6.dist-info/entry_points.txt b/stripe_config/.venv/lib/python3.12/site-packages/jinja2-3.1.6.dist-info/entry_points.txt
new file mode 100644
index 0000000..abc3eae
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/jinja2-3.1.6.dist-info/entry_points.txt
@@ -0,0 +1,3 @@
+[babel.extractors]
+jinja2=jinja2.ext:babel_extract[i18n]
+
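For context, this entry point is how Babel's message extraction discovers the Jinja extractor. A minimal lookup sketch using only the standard library (the printed value is simply whatever the installed metadata contains):

```python
from importlib.metadata import entry_points

# The file above registers the name "jinja2" -> "jinja2.ext:babel_extract"
# with the [i18n] extra under the "babel.extractors" group; Babel iterates
# this group to find extractors.
for ep in entry_points(group="babel.extractors"):
    print(ep.name, "->", ep.value)
```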
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/jinja2-3.1.6.dist-info/licenses/LICENSE.txt b/stripe_config/.venv/lib/python3.12/site-packages/jinja2-3.1.6.dist-info/licenses/LICENSE.txt
new file mode 100644
index 0000000..c37cae4
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/jinja2-3.1.6.dist-info/licenses/LICENSE.txt
@@ -0,0 +1,28 @@
+Copyright 2007 Pallets
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are
+met:
+
+1. Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+
+2. Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in the
+ documentation and/or other materials provided with the distribution.
+
+3. Neither the name of the copyright holder nor the names of its
+ contributors may be used to endorse or promote products derived from
+ this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
+PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED
+TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
+LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
+NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__init__.py b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__init__.py
new file mode 100644
index 0000000..1a423a3
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__init__.py
@@ -0,0 +1,38 @@
+"""Jinja is a template engine written in pure Python. It provides a
+non-XML syntax that supports inline expressions and an optional
+sandboxed environment.
+"""
+
+from .bccache import BytecodeCache as BytecodeCache
+from .bccache import FileSystemBytecodeCache as FileSystemBytecodeCache
+from .bccache import MemcachedBytecodeCache as MemcachedBytecodeCache
+from .environment import Environment as Environment
+from .environment import Template as Template
+from .exceptions import TemplateAssertionError as TemplateAssertionError
+from .exceptions import TemplateError as TemplateError
+from .exceptions import TemplateNotFound as TemplateNotFound
+from .exceptions import TemplateRuntimeError as TemplateRuntimeError
+from .exceptions import TemplatesNotFound as TemplatesNotFound
+from .exceptions import TemplateSyntaxError as TemplateSyntaxError
+from .exceptions import UndefinedError as UndefinedError
+from .loaders import BaseLoader as BaseLoader
+from .loaders import ChoiceLoader as ChoiceLoader
+from .loaders import DictLoader as DictLoader
+from .loaders import FileSystemLoader as FileSystemLoader
+from .loaders import FunctionLoader as FunctionLoader
+from .loaders import ModuleLoader as ModuleLoader
+from .loaders import PackageLoader as PackageLoader
+from .loaders import PrefixLoader as PrefixLoader
+from .runtime import ChainableUndefined as ChainableUndefined
+from .runtime import DebugUndefined as DebugUndefined
+from .runtime import make_logging_undefined as make_logging_undefined
+from .runtime import StrictUndefined as StrictUndefined
+from .runtime import Undefined as Undefined
+from .utils import clear_caches as clear_caches
+from .utils import is_undefined as is_undefined
+from .utils import pass_context as pass_context
+from .utils import pass_environment as pass_environment
+from .utils import pass_eval_context as pass_eval_context
+from .utils import select_autoescape as select_autoescape
+
+__version__ = "3.1.6"
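For reference, a minimal usage sketch built only from the names re-exported above (the template text and variables are illustrative, not part of this patch):

```python
from jinja2 import DictLoader, Environment, StrictUndefined, select_autoescape

env = Environment(
    loader=DictLoader({"hello.txt": "Hello {{ name }}!"}),
    autoescape=select_autoescape(),  # escape only for html/xml-style template names
    undefined=StrictUndefined,       # fail loudly on missing variables
)
print(env.get_template("hello.txt").render(name="world"))  # -> Hello world!
```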
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/__init__.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/__init__.cpython-312.pyc
new file mode 100644
index 0000000..501111e
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/__init__.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/_identifier.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/_identifier.cpython-312.pyc
new file mode 100644
index 0000000..dfe47b2
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/_identifier.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/async_utils.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/async_utils.cpython-312.pyc
new file mode 100644
index 0000000..2234a4b
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/async_utils.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/bccache.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/bccache.cpython-312.pyc
new file mode 100644
index 0000000..ada5084
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/bccache.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/compiler.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/compiler.cpython-312.pyc
new file mode 100644
index 0000000..ac31dd8
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/compiler.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/constants.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/constants.cpython-312.pyc
new file mode 100644
index 0000000..d1e32c8
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/constants.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/debug.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/debug.cpython-312.pyc
new file mode 100644
index 0000000..15c871f
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/debug.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/defaults.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/defaults.cpython-312.pyc
new file mode 100644
index 0000000..5a6d4fa
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/defaults.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/environment.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/environment.cpython-312.pyc
new file mode 100644
index 0000000..3cf9b9d
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/environment.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/exceptions.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/exceptions.cpython-312.pyc
new file mode 100644
index 0000000..860d91d
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/exceptions.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/ext.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/ext.cpython-312.pyc
new file mode 100644
index 0000000..28f477e
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/ext.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/filters.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/filters.cpython-312.pyc
new file mode 100644
index 0000000..c98a0ac
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/filters.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/idtracking.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/idtracking.cpython-312.pyc
new file mode 100644
index 0000000..6dfe069
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/idtracking.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/lexer.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/lexer.cpython-312.pyc
new file mode 100644
index 0000000..045dc15
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/lexer.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/loaders.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/loaders.cpython-312.pyc
new file mode 100644
index 0000000..7cc28ae
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/loaders.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/meta.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/meta.cpython-312.pyc
new file mode 100644
index 0000000..5c86b73
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/meta.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/nativetypes.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/nativetypes.cpython-312.pyc
new file mode 100644
index 0000000..400da05
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/nativetypes.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/nodes.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/nodes.cpython-312.pyc
new file mode 100644
index 0000000..41a161b
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/nodes.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/optimizer.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/optimizer.cpython-312.pyc
new file mode 100644
index 0000000..b94d98f
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/optimizer.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/parser.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/parser.cpython-312.pyc
new file mode 100644
index 0000000..e6645a3
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/parser.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/runtime.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/runtime.cpython-312.pyc
new file mode 100644
index 0000000..d832edb
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/runtime.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/sandbox.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/sandbox.cpython-312.pyc
new file mode 100644
index 0000000..cf9718d
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/sandbox.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/tests.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/tests.cpython-312.pyc
new file mode 100644
index 0000000..32d8a7a
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/tests.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/utils.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/utils.cpython-312.pyc
new file mode 100644
index 0000000..ebc7ec6
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/utils.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/visitor.cpython-312.pyc b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/visitor.cpython-312.pyc
new file mode 100644
index 0000000..54b15c2
Binary files /dev/null and b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/__pycache__/visitor.cpython-312.pyc differ
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/jinja2/_identifier.py b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/_identifier.py
new file mode 100644
index 0000000..928c150
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/_identifier.py
@@ -0,0 +1,6 @@
+import re
+
+# generated by scripts/generate_identifier_pattern.py
+pattern = re.compile(
+ r"[\w·̀-ͯ·҃-֑҇-ׇֽֿׁׂׅׄؐ-ًؚ-ٰٟۖ-ۜ۟-۪ۤۧۨ-ܑۭܰ-݊ަ-ް߫-߽߳ࠖ-࠙ࠛ-ࠣࠥ-ࠧࠩ-࡙࠭-࡛࣓-ࣣ࣡-ःऺ-़ा-ॏ॑-ॗॢॣঁ-ঃ়া-ৄেৈো-্ৗৢৣ৾ਁ-ਃ਼ਾ-ੂੇੈੋ-੍ੑੰੱੵઁ-ઃ઼ા-ૅે-ૉો-્ૢૣૺ-૿ଁ-ଃ଼ା-ୄେୈୋ-୍ୖୗୢୣஂா-ூெ-ைொ-்ௗఀ-ఄా-ౄె-ైొ-్ౕౖౢౣಁ-ಃ಼ಾ-ೄೆ-ೈೊ-್ೕೖೢೣഀ-ഃ഻഼ാ-ൄെ-ൈൊ-്ൗൢൣංඃ්ා-ුූෘ-ෟෲෳัิ-ฺ็-๎ັິ-ູົຼ່-ໍ༹༘༙༵༷༾༿ཱ-྄྆྇ྍ-ྗྙ-ྼ࿆ါ-ှၖ-ၙၞ-ၠၢ-ၤၧ-ၭၱ-ၴႂ-ႍႏႚ-ႝ፝-፟ᜒ-᜔ᜲ-᜴ᝒᝓᝲᝳ឴-៓៝᠋-᠍ᢅᢆᢩᤠ-ᤫᤰ-᤻ᨗ-ᨛᩕ-ᩞ᩠-᩿᩼᪰-᪽ᬀ-ᬄ᬴-᭄᭫-᭳ᮀ-ᮂᮡ-ᮭ᯦-᯳ᰤ-᰷᳐-᳔᳒-᳨᳭ᳲ-᳴᳷-᳹᷀-᷹᷻-᷿‿⁀⁔⃐-⃥⃜⃡-⃰℘℮⳯-⵿⳱ⷠ-〪ⷿ-゙゚〯꙯ꙴ-꙽ꚞꚟ꛰꛱ꠂ꠆ꠋꠣ-ꠧꢀꢁꢴ-ꣅ꣠-꣱ꣿꤦ-꤭ꥇ-꥓ꦀ-ꦃ꦳-꧀ꧥꨩ-ꨶꩃꩌꩍꩻ-ꩽꪰꪲ-ꪴꪷꪸꪾ꪿꫁ꫫ-ꫯꫵ꫶ꯣ-ꯪ꯬꯭ﬞ︀-️︠-︯︳︴﹍-﹏_𐇽𐋠𐍶-𐍺𐨁-𐨃𐨅𐨆𐨌-𐨏𐨸-𐨿𐨺𐫦𐫥𐴤-𐽆𐴧-𐽐𑀀-𑀂𑀸-𑁆𑁿-𑂂𑂰-𑂺𑄀-𑄂𑄧-𑄴𑅅𑅆𑅳𑆀-𑆂𑆳-𑇀𑇉-𑇌𑈬-𑈷𑈾𑋟-𑋪𑌀-𑌃𑌻𑌼𑌾-𑍄𑍇𑍈𑍋-𑍍𑍗𑍢𑍣𑍦-𑍬𑍰-𑍴𑐵-𑑆𑑞𑒰-𑓃𑖯-𑖵𑖸-𑗀𑗜𑗝𑘰-𑙀𑚫-𑚷𑜝-𑜫𑠬-𑠺𑨁-𑨊𑨳-𑨹𑨻-𑨾𑩇𑩑-𑩛𑪊-𑪙𑰯-𑰶𑰸-𑰿𑲒-𑲧𑲩-𑲶𑴱-𑴶𑴺𑴼𑴽𑴿-𑵅𑵇𑶊-𑶎𑶐𑶑𑶓-𑶗𑻳-𑻶𖫰-𖫴𖬰-𖬶𖽑-𖽾𖾏-𖾒𛲝𛲞𝅥-𝅩𝅭-𝅲𝅻-𝆂𝆅-𝆋𝆪-𝆭𝉂-𝉄𝨀-𝨶𝨻-𝩬𝩵𝪄𝪛-𝪟𝪡-𝪯𞀀-𞀆𞀈-𞀘𞀛-𞀡𞀣𞀤𞀦-𞣐𞀪-𞣖𞥄-𞥊󠄀-󠇯]+" # noqa: B950
+)
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/jinja2/async_utils.py b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/async_utils.py
new file mode 100644
index 0000000..f0c1402
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/async_utils.py
@@ -0,0 +1,99 @@
+import inspect
+import typing as t
+from functools import WRAPPER_ASSIGNMENTS
+from functools import wraps
+
+from .utils import _PassArg
+from .utils import pass_eval_context
+
+if t.TYPE_CHECKING:
+ import typing_extensions as te
+
+V = t.TypeVar("V")
+
+
+def async_variant(normal_func): # type: ignore
+ def decorator(async_func): # type: ignore
+ pass_arg = _PassArg.from_obj(normal_func)
+ need_eval_context = pass_arg is None
+
+ if pass_arg is _PassArg.environment:
+
+ def is_async(args: t.Any) -> bool:
+ return t.cast(bool, args[0].is_async)
+
+ else:
+
+ def is_async(args: t.Any) -> bool:
+ return t.cast(bool, args[0].environment.is_async)
+
+ # Take the doc and annotations from the sync function, but the
+ # name from the async function. Pallets-Sphinx-Themes
+ # build_function_directive expects __wrapped__ to point to the
+ # sync function.
+ async_func_attrs = ("__module__", "__name__", "__qualname__")
+ normal_func_attrs = tuple(set(WRAPPER_ASSIGNMENTS).difference(async_func_attrs))
+
+ @wraps(normal_func, assigned=normal_func_attrs)
+ @wraps(async_func, assigned=async_func_attrs, updated=())
+ def wrapper(*args, **kwargs): # type: ignore
+ b = is_async(args)
+
+ if need_eval_context:
+ args = args[1:]
+
+ if b:
+ return async_func(*args, **kwargs)
+
+ return normal_func(*args, **kwargs)
+
+ if need_eval_context:
+ wrapper = pass_eval_context(wrapper)
+
+ wrapper.jinja_async_variant = True # type: ignore[attr-defined]
+ return wrapper
+
+ return decorator
+
+
+_common_primitives = {int, float, bool, str, list, dict, tuple, type(None)}
+
+
+async def auto_await(value: t.Union[t.Awaitable["V"], "V"]) -> "V":
+ # Avoid a costly call to isawaitable
+ if type(value) in _common_primitives:
+ return t.cast("V", value)
+
+ if inspect.isawaitable(value):
+ return await t.cast("t.Awaitable[V]", value)
+
+ return value
+
+
+class _IteratorToAsyncIterator(t.Generic[V]):
+ def __init__(self, iterator: "t.Iterator[V]"):
+ self._iterator = iterator
+
+ def __aiter__(self) -> "te.Self":
+ return self
+
+ async def __anext__(self) -> V:
+ try:
+ return next(self._iterator)
+ except StopIteration as e:
+ raise StopAsyncIteration(e.value) from e
+
+
+def auto_aiter(
+ iterable: "t.Union[t.AsyncIterable[V], t.Iterable[V]]",
+) -> "t.AsyncIterator[V]":
+ if hasattr(iterable, "__aiter__"):
+ return iterable.__aiter__()
+ else:
+ return _IteratorToAsyncIterator(iter(iterable))
+
+
+async def auto_to_list(
+ value: "t.Union[t.AsyncIterable[V], t.Iterable[V]]",
+) -> t.List["V"]:
+ return [x async for x in auto_aiter(value)]
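For context, a small sketch of how the helpers above bridge sync and async iteration when an environment runs in async mode (the `asyncio` driver code here is illustrative):

```python
import asyncio

from jinja2.async_utils import auto_aiter, auto_to_list


async def main() -> None:
    # A plain iterable has no __aiter__, so auto_aiter wraps it in
    # _IteratorToAsyncIterator and auto_to_list drains it asynchronously.
    print(await auto_to_list(range(3)))  # [0, 1, 2]

    async def letters():
        yield "a"
        yield "b"

    # A native async iterable is passed through via its own __aiter__.
    print([x async for x in auto_aiter(letters())])  # ['a', 'b']


asyncio.run(main())
```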
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/jinja2/bccache.py b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/bccache.py
new file mode 100644
index 0000000..ada8b09
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/bccache.py
@@ -0,0 +1,408 @@
+"""The optional bytecode cache system. This is useful if you have very
+complex template situations and the compilation of all those templates
+slows down your application too much.
+
+Situations where this is useful are often forking web applications that
+are initialized on the first request.
+"""
+
+import errno
+import fnmatch
+import marshal
+import os
+import pickle
+import stat
+import sys
+import tempfile
+import typing as t
+from hashlib import sha1
+from io import BytesIO
+from types import CodeType
+
+if t.TYPE_CHECKING:
+ import typing_extensions as te
+
+ from .environment import Environment
+
+ class _MemcachedClient(te.Protocol):
+ def get(self, key: str) -> bytes: ...
+
+ def set(
+ self, key: str, value: bytes, timeout: t.Optional[int] = None
+ ) -> None: ...
+
+
+bc_version = 5
+# Magic bytes to identify Jinja bytecode cache files. Contains the
+# Python major and minor version to avoid loading incompatible bytecode
+# if a project upgrades its Python version.
+bc_magic = (
+ b"j2"
+ + pickle.dumps(bc_version, 2)
+ + pickle.dumps((sys.version_info[0] << 24) | sys.version_info[1], 2)
+)
+
+
+class Bucket:
+ """Buckets are used to store the bytecode for one template. It's created
+ and initialized by the bytecode cache and passed to the loading functions.
+
+ The buckets get an internal checksum from the cache assigned and use this
+ to automatically reject outdated cache material. Individual bytecode
+ cache subclasses don't have to care about cache invalidation.
+ """
+
+ def __init__(self, environment: "Environment", key: str, checksum: str) -> None:
+ self.environment = environment
+ self.key = key
+ self.checksum = checksum
+ self.reset()
+
+ def reset(self) -> None:
+ """Resets the bucket (unloads the bytecode)."""
+ self.code: t.Optional[CodeType] = None
+
+ def load_bytecode(self, f: t.BinaryIO) -> None:
+ """Loads bytecode from a file or file like object."""
+ # make sure the magic header is correct
+ magic = f.read(len(bc_magic))
+ if magic != bc_magic:
+ self.reset()
+ return
+ # the source code of the file changed, we need to reload
+ checksum = pickle.load(f)
+ if self.checksum != checksum:
+ self.reset()
+ return
+ # if marshal_load fails then we need to reload
+ try:
+ self.code = marshal.load(f)
+ except (EOFError, ValueError, TypeError):
+ self.reset()
+ return
+
+ def write_bytecode(self, f: t.IO[bytes]) -> None:
+ """Dump the bytecode into the file or file like object passed."""
+ if self.code is None:
+ raise TypeError("can't write empty bucket")
+ f.write(bc_magic)
+ pickle.dump(self.checksum, f, 2)
+ marshal.dump(self.code, f)
+
+ def bytecode_from_string(self, string: bytes) -> None:
+ """Load bytecode from bytes."""
+ self.load_bytecode(BytesIO(string))
+
+ def bytecode_to_string(self) -> bytes:
+ """Return the bytecode as bytes."""
+ out = BytesIO()
+ self.write_bytecode(out)
+ return out.getvalue()
+
+
+class BytecodeCache:
+ """To implement your own bytecode cache you have to subclass this class
+ and override :meth:`load_bytecode` and :meth:`dump_bytecode`. Both of
+ these methods are passed a :class:`~jinja2.bccache.Bucket`.
+
+ A very basic bytecode cache that saves the bytecode on the file system::
+
+ from os import path
+
+ class MyCache(BytecodeCache):
+
+ def __init__(self, directory):
+ self.directory = directory
+
+ def load_bytecode(self, bucket):
+ filename = path.join(self.directory, bucket.key)
+ if path.exists(filename):
+ with open(filename, 'rb') as f:
+ bucket.load_bytecode(f)
+
+ def dump_bytecode(self, bucket):
+ filename = path.join(self.directory, bucket.key)
+ with open(filename, 'wb') as f:
+ bucket.write_bytecode(f)
+
+ A more advanced version of a filesystem based bytecode cache is part of
+ Jinja.
+ """
+
+ def load_bytecode(self, bucket: Bucket) -> None:
+ """Subclasses have to override this method to load bytecode into a
+ bucket. If they are not able to find code in the cache for the
+ bucket, it must not do anything.
+ """
+ raise NotImplementedError()
+
+ def dump_bytecode(self, bucket: Bucket) -> None:
+ """Subclasses have to override this method to write the bytecode
+        from a bucket back to the cache. If it is unable to do so, it must not
+ fail silently but raise an exception.
+ """
+ raise NotImplementedError()
+
+ def clear(self) -> None:
+ """Clears the cache. This method is not used by Jinja but should be
+ implemented to allow applications to clear the bytecode cache used
+ by a particular environment.
+ """
+
+ def get_cache_key(
+        self, name: str, filename: t.Optional[str] = None
+ ) -> str:
+ """Returns the unique hash key for this template name."""
+ hash = sha1(name.encode("utf-8"))
+
+ if filename is not None:
+ hash.update(f"|{filename}".encode())
+
+ return hash.hexdigest()
+
+ def get_source_checksum(self, source: str) -> str:
+ """Returns a checksum for the source."""
+ return sha1(source.encode("utf-8")).hexdigest()
+
+ def get_bucket(
+ self,
+ environment: "Environment",
+ name: str,
+ filename: t.Optional[str],
+ source: str,
+ ) -> Bucket:
+ """Return a cache bucket for the given template. All arguments are
+ mandatory but filename may be `None`.
+ """
+ key = self.get_cache_key(name, filename)
+ checksum = self.get_source_checksum(source)
+ bucket = Bucket(environment, key, checksum)
+ self.load_bytecode(bucket)
+ return bucket
+
+ def set_bucket(self, bucket: Bucket) -> None:
+ """Put the bucket into the cache."""
+ self.dump_bytecode(bucket)
+
+
+class FileSystemBytecodeCache(BytecodeCache):
+ """A bytecode cache that stores bytecode on the filesystem. It accepts
+ two arguments: The directory where the cache items are stored and a
+ pattern string that is used to build the filename.
+
+ If no directory is specified a default cache directory is selected. On
+ Windows the user's temp directory is used, on UNIX systems a directory
+ is created for the user in the system temp directory.
+
+ The pattern can be used to have multiple separate caches operate on the
+ same directory. The default pattern is ``'__jinja2_%s.cache'``. ``%s``
+ is replaced with the cache key.
+
+ >>> bcc = FileSystemBytecodeCache('/tmp/jinja_cache', '%s.cache')
+
+ This bytecode cache supports clearing of the cache using the clear method.
+ """
+
+ def __init__(
+ self, directory: t.Optional[str] = None, pattern: str = "__jinja2_%s.cache"
+ ) -> None:
+ if directory is None:
+ directory = self._get_default_cache_dir()
+ self.directory = directory
+ self.pattern = pattern
+
+ def _get_default_cache_dir(self) -> str:
+ def _unsafe_dir() -> "te.NoReturn":
+ raise RuntimeError(
+ "Cannot determine safe temp directory. You "
+ "need to explicitly provide one."
+ )
+
+ tmpdir = tempfile.gettempdir()
+
+        # On Windows the temporary directory is already user-specific unless
+        # explicitly forced otherwise, so we can just use it.
+ if os.name == "nt":
+ return tmpdir
+ if not hasattr(os, "getuid"):
+ _unsafe_dir()
+
+ dirname = f"_jinja2-cache-{os.getuid()}"
+ actual_dir = os.path.join(tmpdir, dirname)
+
+ try:
+ os.mkdir(actual_dir, stat.S_IRWXU)
+ except OSError as e:
+ if e.errno != errno.EEXIST:
+ raise
+ try:
+ os.chmod(actual_dir, stat.S_IRWXU)
+ actual_dir_stat = os.lstat(actual_dir)
+ if (
+ actual_dir_stat.st_uid != os.getuid()
+ or not stat.S_ISDIR(actual_dir_stat.st_mode)
+ or stat.S_IMODE(actual_dir_stat.st_mode) != stat.S_IRWXU
+ ):
+ _unsafe_dir()
+ except OSError as e:
+ if e.errno != errno.EEXIST:
+ raise
+
+ actual_dir_stat = os.lstat(actual_dir)
+ if (
+ actual_dir_stat.st_uid != os.getuid()
+ or not stat.S_ISDIR(actual_dir_stat.st_mode)
+ or stat.S_IMODE(actual_dir_stat.st_mode) != stat.S_IRWXU
+ ):
+ _unsafe_dir()
+
+ return actual_dir
+
+ def _get_cache_filename(self, bucket: Bucket) -> str:
+ return os.path.join(self.directory, self.pattern % (bucket.key,))
+
+ def load_bytecode(self, bucket: Bucket) -> None:
+ filename = self._get_cache_filename(bucket)
+
+ # Don't test for existence before opening the file, since the
+ # file could disappear after the test before the open.
+ try:
+ f = open(filename, "rb")
+ except (FileNotFoundError, IsADirectoryError, PermissionError):
+ # PermissionError can occur on Windows when an operation is
+ # in progress, such as calling clear().
+ return
+
+ with f:
+ bucket.load_bytecode(f)
+
+ def dump_bytecode(self, bucket: Bucket) -> None:
+ # Write to a temporary file, then rename to the real name after
+ # writing. This avoids another process reading the file before
+ # it is fully written.
+ name = self._get_cache_filename(bucket)
+ f = tempfile.NamedTemporaryFile(
+ mode="wb",
+ dir=os.path.dirname(name),
+ prefix=os.path.basename(name),
+ suffix=".tmp",
+ delete=False,
+ )
+
+ def remove_silent() -> None:
+ try:
+ os.remove(f.name)
+ except OSError:
+ # Another process may have called clear(). On Windows,
+ # another program may be holding the file open.
+ pass
+
+ try:
+ with f:
+ bucket.write_bytecode(f)
+ except BaseException:
+ remove_silent()
+ raise
+
+ try:
+ os.replace(f.name, name)
+ except OSError:
+ # Another process may have called clear(). On Windows,
+ # another program may be holding the file open.
+ remove_silent()
+ except BaseException:
+ remove_silent()
+ raise
+
+ def clear(self) -> None:
+ # imported lazily here because google app-engine doesn't support
+ # write access on the file system and the function does not exist
+ # normally.
+ from os import remove
+
+ files = fnmatch.filter(os.listdir(self.directory), self.pattern % ("*",))
+ for filename in files:
+ try:
+ remove(os.path.join(self.directory, filename))
+ except OSError:
+ pass
+
+
+class MemcachedBytecodeCache(BytecodeCache):
+ """This class implements a bytecode cache that uses a memcache cache for
+ storing the information. It does not enforce a specific memcache library
+ (tummy's memcache or cmemcache) but will accept any class that provides
+ the minimal interface required.
+
+ Libraries compatible with this class:
+
+    - `cachelib <https://github.com/pallets/cachelib>`_
+    - `python-memcached <https://pypi.org/project/python-memcached/>`_
+
+ (Unfortunately the django cache interface is not compatible because it
+ does not support storing binary data, only text. You can however pass
+ the underlying cache client to the bytecode cache which is available
+ as `django.core.cache.cache._client`.)
+
+ The minimal interface for the client passed to the constructor is this:
+
+ .. class:: MinimalClientInterface
+
+ .. method:: set(key, value[, timeout])
+
+ Stores the bytecode in the cache. `value` is a string and
+            `timeout` the timeout of the key. If timeout is not provided,
+            a default timeout or no timeout should be assumed; if it's
+ provided it's an integer with the number of seconds the cache
+ item should exist.
+
+ .. method:: get(key)
+
+ Returns the value for the cache key. If the item does not
+ exist in the cache the return value must be `None`.
+
+ The other arguments to the constructor are the prefix for all keys that
+ is added before the actual cache key and the timeout for the bytecode in
+ the cache system. We recommend a high (or no) timeout.
+
+ This bytecode cache does not support clearing of used items in the cache.
+ The clear method is a no-operation function.
+
+ .. versionadded:: 2.7
+ Added support for ignoring memcache errors through the
+ `ignore_memcache_errors` parameter.
+ """
+
+ def __init__(
+ self,
+ client: "_MemcachedClient",
+ prefix: str = "jinja2/bytecode/",
+ timeout: t.Optional[int] = None,
+ ignore_memcache_errors: bool = True,
+ ):
+ self.client = client
+ self.prefix = prefix
+ self.timeout = timeout
+ self.ignore_memcache_errors = ignore_memcache_errors
+
+ def load_bytecode(self, bucket: Bucket) -> None:
+ try:
+ code = self.client.get(self.prefix + bucket.key)
+ except Exception:
+ if not self.ignore_memcache_errors:
+ raise
+ else:
+ bucket.bytecode_from_string(code)
+
+ def dump_bytecode(self, bucket: Bucket) -> None:
+ key = self.prefix + bucket.key
+ value = bucket.bytecode_to_string()
+
+ try:
+ if self.timeout is not None:
+ self.client.set(key, value, self.timeout)
+ else:
+ self.client.set(key, value)
+ except Exception:
+ if not self.ignore_memcache_errors:
+ raise
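For reference, a minimal sketch of plugging the filesystem cache above into an environment (the directory and template names are hypothetical):

```python
from jinja2 import Environment, FileSystemBytecodeCache, FileSystemLoader

env = Environment(
    loader=FileSystemLoader("templates"),  # hypothetical template directory
    # the cache directory must already exist; files are named per the
    # "__jinja2_%s.cache" pattern documented above
    bytecode_cache=FileSystemBytecodeCache("/tmp/jinja_cache"),
)
# The first render compiles the template and dumps its marshalled bytecode
# into the cache; later processes load it instead of re-compiling the source.
env.get_template("index.html").render(name="world")
```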
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/jinja2/compiler.py b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/compiler.py
new file mode 100644
index 0000000..a4ff6a1
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/compiler.py
@@ -0,0 +1,1998 @@
+"""Compiles nodes from the parser into Python code."""
+
+import typing as t
+from contextlib import contextmanager
+from functools import update_wrapper
+from io import StringIO
+from itertools import chain
+from keyword import iskeyword as is_python_keyword
+
+from markupsafe import escape
+from markupsafe import Markup
+
+from . import nodes
+from .exceptions import TemplateAssertionError
+from .idtracking import Symbols
+from .idtracking import VAR_LOAD_ALIAS
+from .idtracking import VAR_LOAD_PARAMETER
+from .idtracking import VAR_LOAD_RESOLVE
+from .idtracking import VAR_LOAD_UNDEFINED
+from .nodes import EvalContext
+from .optimizer import Optimizer
+from .utils import _PassArg
+from .utils import concat
+from .visitor import NodeVisitor
+
+if t.TYPE_CHECKING:
+ import typing_extensions as te
+
+ from .environment import Environment
+
+F = t.TypeVar("F", bound=t.Callable[..., t.Any])
+
+operators = {
+ "eq": "==",
+ "ne": "!=",
+ "gt": ">",
+ "gteq": ">=",
+ "lt": "<",
+ "lteq": "<=",
+ "in": "in",
+ "notin": "not in",
+}
+
+
+def optimizeconst(f: F) -> F:
+ def new_func(
+ self: "CodeGenerator", node: nodes.Expr, frame: "Frame", **kwargs: t.Any
+ ) -> t.Any:
+ # Only optimize if the frame is not volatile
+ if self.optimizer is not None and not frame.eval_ctx.volatile:
+ new_node = self.optimizer.visit(node, frame.eval_ctx)
+
+ if new_node != node:
+ return self.visit(new_node, frame)
+
+ return f(self, node, frame, **kwargs)
+
+ return update_wrapper(new_func, f) # type: ignore[return-value]
+
+
+def _make_binop(op: str) -> t.Callable[["CodeGenerator", nodes.BinExpr, "Frame"], None]:
+ @optimizeconst
+ def visitor(self: "CodeGenerator", node: nodes.BinExpr, frame: Frame) -> None:
+ if (
+ self.environment.sandboxed and op in self.environment.intercepted_binops # type: ignore
+ ):
+ self.write(f"environment.call_binop(context, {op!r}, ")
+ self.visit(node.left, frame)
+ self.write(", ")
+ self.visit(node.right, frame)
+ else:
+ self.write("(")
+ self.visit(node.left, frame)
+ self.write(f" {op} ")
+ self.visit(node.right, frame)
+
+ self.write(")")
+
+ return visitor
+
+
+def _make_unop(
+ op: str,
+) -> t.Callable[["CodeGenerator", nodes.UnaryExpr, "Frame"], None]:
+ @optimizeconst
+ def visitor(self: "CodeGenerator", node: nodes.UnaryExpr, frame: Frame) -> None:
+ if (
+ self.environment.sandboxed and op in self.environment.intercepted_unops # type: ignore
+ ):
+ self.write(f"environment.call_unop(context, {op!r}, ")
+ self.visit(node.node, frame)
+ else:
+ self.write("(" + op)
+ self.visit(node.node, frame)
+
+ self.write(")")
+
+ return visitor
+
+
+def generate(
+ node: nodes.Template,
+ environment: "Environment",
+ name: t.Optional[str],
+ filename: t.Optional[str],
+ stream: t.Optional[t.TextIO] = None,
+ defer_init: bool = False,
+ optimized: bool = True,
+) -> t.Optional[str]:
+ """Generate the python source for a node tree."""
+ if not isinstance(node, nodes.Template):
+ raise TypeError("Can't compile non template nodes")
+
+ generator = environment.code_generator_class(
+ environment, name, filename, stream, defer_init, optimized
+ )
+ generator.visit(node)
+
+ if stream is None:
+ return generator.stream.getvalue() # type: ignore
+
+ return None
+
+
+def has_safe_repr(value: t.Any) -> bool:
+ """Does the node have a safe representation?"""
+ if value is None or value is NotImplemented or value is Ellipsis:
+ return True
+
+ if type(value) in {bool, int, float, complex, range, str, Markup}:
+ return True
+
+ if type(value) in {tuple, list, set, frozenset}:
+ return all(has_safe_repr(v) for v in value)
+
+ if type(value) is dict: # noqa E721
+ return all(has_safe_repr(k) and has_safe_repr(v) for k, v in value.items())
+
+ return False
+
+
+def find_undeclared(
+ nodes: t.Iterable[nodes.Node], names: t.Iterable[str]
+) -> t.Set[str]:
+ """Check if the names passed are accessed undeclared. The return value
+ is a set of all the undeclared names from the sequence of names found.
+ """
+ visitor = UndeclaredNameVisitor(names)
+ try:
+ for node in nodes:
+ visitor.visit(node)
+ except VisitorExit:
+ pass
+ return visitor.undeclared
+
+
+class MacroRef:
+ def __init__(self, node: t.Union[nodes.Macro, nodes.CallBlock]) -> None:
+ self.node = node
+ self.accesses_caller = False
+ self.accesses_kwargs = False
+ self.accesses_varargs = False
+
+
+class Frame:
+ """Holds compile time information for us."""
+
+ def __init__(
+ self,
+ eval_ctx: EvalContext,
+ parent: t.Optional["Frame"] = None,
+ level: t.Optional[int] = None,
+ ) -> None:
+ self.eval_ctx = eval_ctx
+
+ # the parent of this frame
+ self.parent = parent
+
+ if parent is None:
+ self.symbols = Symbols(level=level)
+
+ # in some dynamic inheritance situations the compiler needs to add
+ # write tests around output statements.
+ self.require_output_check = False
+
+ # inside some tags we are using a buffer rather than yield statements.
+ # this for example affects {% filter %} or {% macro %}. If a frame
+ # is buffered this variable points to the name of the list used as
+ # buffer.
+ self.buffer: t.Optional[str] = None
+
+ # the name of the block we're in, otherwise None.
+ self.block: t.Optional[str] = None
+
+ else:
+ self.symbols = Symbols(parent.symbols, level=level)
+ self.require_output_check = parent.require_output_check
+ self.buffer = parent.buffer
+ self.block = parent.block
+
+ # a toplevel frame is the root + soft frames such as if conditions.
+ self.toplevel = False
+
+ # the root frame is basically just the outermost frame, so no if
+ # conditions. This information is used to optimize inheritance
+ # situations.
+ self.rootlevel = False
+
+ # variables set inside of loops and blocks should not affect outer frames,
+        # but they still need to be kept track of as part of the active context.
+ self.loop_frame = False
+ self.block_frame = False
+
+ # track whether the frame is being used in an if-statement or conditional
+ # expression as it determines which errors should be raised during runtime
+ # or compile time.
+ self.soft_frame = False
+
+ def copy(self) -> "te.Self":
+ """Create a copy of the current one."""
+ rv = object.__new__(self.__class__)
+ rv.__dict__.update(self.__dict__)
+ rv.symbols = self.symbols.copy()
+ return rv
+
+ def inner(self, isolated: bool = False) -> "Frame":
+ """Return an inner frame."""
+ if isolated:
+ return Frame(self.eval_ctx, level=self.symbols.level + 1)
+ return Frame(self.eval_ctx, self)
+
+ def soft(self) -> "te.Self":
+ """Return a soft frame. A soft frame may not be modified as
+        a standalone thing as it shares its resources with the frame it
+        was created from, but it's not a rootlevel frame any longer.
+
+ This is only used to implement if-statements and conditional
+ expressions.
+ """
+ rv = self.copy()
+ rv.rootlevel = False
+ rv.soft_frame = True
+ return rv
+
+ __copy__ = copy
+
+
+class VisitorExit(RuntimeError):
+ """Exception used by the `UndeclaredNameVisitor` to signal a stop."""
+
+
+class DependencyFinderVisitor(NodeVisitor):
+ """A visitor that collects filter and test calls."""
+
+ def __init__(self) -> None:
+ self.filters: t.Set[str] = set()
+ self.tests: t.Set[str] = set()
+
+ def visit_Filter(self, node: nodes.Filter) -> None:
+ self.generic_visit(node)
+ self.filters.add(node.name)
+
+ def visit_Test(self, node: nodes.Test) -> None:
+ self.generic_visit(node)
+ self.tests.add(node.name)
+
+ def visit_Block(self, node: nodes.Block) -> None:
+ """Stop visiting at blocks."""
+
+
+class UndeclaredNameVisitor(NodeVisitor):
+ """A visitor that checks if a name is accessed without being
+ declared. This is different from the frame visitor as it will
+ not stop at closure frames.
+ """
+
+ def __init__(self, names: t.Iterable[str]) -> None:
+ self.names = set(names)
+ self.undeclared: t.Set[str] = set()
+
+ def visit_Name(self, node: nodes.Name) -> None:
+ if node.ctx == "load" and node.name in self.names:
+ self.undeclared.add(node.name)
+ if self.undeclared == self.names:
+ raise VisitorExit()
+ else:
+ self.names.discard(node.name)
+
+ def visit_Block(self, node: nodes.Block) -> None:
+        """Stop visiting at blocks."""
+
+
+class CompilerExit(Exception):
+ """Raised if the compiler encountered a situation where it just
+ doesn't make sense to further process the code. Any block that
+ raises such an exception is not further processed.
+ """
+
+
+class CodeGenerator(NodeVisitor):
+ def __init__(
+ self,
+ environment: "Environment",
+ name: t.Optional[str],
+ filename: t.Optional[str],
+ stream: t.Optional[t.TextIO] = None,
+ defer_init: bool = False,
+ optimized: bool = True,
+ ) -> None:
+ if stream is None:
+ stream = StringIO()
+ self.environment = environment
+ self.name = name
+ self.filename = filename
+ self.stream = stream
+ self.created_block_context = False
+ self.defer_init = defer_init
+ self.optimizer: t.Optional[Optimizer] = None
+
+ if optimized:
+ self.optimizer = Optimizer(environment)
+
+ # aliases for imports
+ self.import_aliases: t.Dict[str, str] = {}
+
+ # a registry for all blocks. Because blocks are moved out
+ # into the global python scope they are registered here
+ self.blocks: t.Dict[str, nodes.Block] = {}
+
+ # the number of extends statements so far
+ self.extends_so_far = 0
+
+ # some templates have a rootlevel extends. In this case we
+ # can safely assume that we're a child template and do some
+ # more optimizations.
+ self.has_known_extends = False
+
+ # the current line number
+ self.code_lineno = 1
+
+ # registry of all filters and tests (global, not block local)
+ self.tests: t.Dict[str, str] = {}
+ self.filters: t.Dict[str, str] = {}
+
+ # the debug information
+ self.debug_info: t.List[t.Tuple[int, int]] = []
+ self._write_debug_info: t.Optional[int] = None
+
+ # the number of new lines before the next write()
+ self._new_lines = 0
+
+ # the line number of the last written statement
+ self._last_line = 0
+
+ # true if nothing was written so far.
+ self._first_write = True
+
+ # used by the `temporary_identifier` method to get new
+ # unique, temporary identifier
+ self._last_identifier = 0
+
+ # the current indentation
+ self._indentation = 0
+
+ # Tracks toplevel assignments
+ self._assign_stack: t.List[t.Set[str]] = []
+
+ # Tracks parameter definition blocks
+ self._param_def_block: t.List[t.Set[str]] = []
+
+ # Tracks the current context.
+ self._context_reference_stack = ["context"]
+
+ @property
+ def optimized(self) -> bool:
+ return self.optimizer is not None
+
+ # -- Various compilation helpers
+
+ def fail(self, msg: str, lineno: int) -> "te.NoReturn":
+ """Fail with a :exc:`TemplateAssertionError`."""
+ raise TemplateAssertionError(msg, lineno, self.name, self.filename)
+
+ def temporary_identifier(self) -> str:
+ """Get a new unique identifier."""
+ self._last_identifier += 1
+ return f"t_{self._last_identifier}"
+
+ def buffer(self, frame: Frame) -> None:
+ """Enable buffering for the frame from that point onwards."""
+ frame.buffer = self.temporary_identifier()
+ self.writeline(f"{frame.buffer} = []")
+
+ def return_buffer_contents(
+ self, frame: Frame, force_unescaped: bool = False
+ ) -> None:
+ """Return the buffer contents of the frame."""
+ if not force_unescaped:
+ if frame.eval_ctx.volatile:
+ self.writeline("if context.eval_ctx.autoescape:")
+ self.indent()
+ self.writeline(f"return Markup(concat({frame.buffer}))")
+ self.outdent()
+ self.writeline("else:")
+ self.indent()
+ self.writeline(f"return concat({frame.buffer})")
+ self.outdent()
+ return
+ elif frame.eval_ctx.autoescape:
+ self.writeline(f"return Markup(concat({frame.buffer}))")
+ return
+ self.writeline(f"return concat({frame.buffer})")
+
+ def indent(self) -> None:
+ """Indent by one."""
+ self._indentation += 1
+
+ def outdent(self, step: int = 1) -> None:
+ """Outdent by step."""
+ self._indentation -= step
+
+ def start_write(self, frame: Frame, node: t.Optional[nodes.Node] = None) -> None:
+ """Yield or write into the frame buffer."""
+ if frame.buffer is None:
+ self.writeline("yield ", node)
+ else:
+ self.writeline(f"{frame.buffer}.append(", node)
+
+ def end_write(self, frame: Frame) -> None:
+ """End the writing process started by `start_write`."""
+ if frame.buffer is not None:
+ self.write(")")
+
+ def simple_write(
+ self, s: str, frame: Frame, node: t.Optional[nodes.Node] = None
+ ) -> None:
+ """Simple shortcut for start_write + write + end_write."""
+ self.start_write(frame, node)
+ self.write(s)
+ self.end_write(frame)
+
+ def blockvisit(self, nodes: t.Iterable[nodes.Node], frame: Frame) -> None:
+        """Visit a list of nodes as a block in a frame. A dummy ``pass``
+        statement is written first so the generated block is never empty.
+ """
+ try:
+ self.writeline("pass")
+ for node in nodes:
+ self.visit(node, frame)
+ except CompilerExit:
+ pass
+
+ def write(self, x: str) -> None:
+ """Write a string into the output stream."""
+ if self._new_lines:
+ if not self._first_write:
+ self.stream.write("\n" * self._new_lines)
+ self.code_lineno += self._new_lines
+ if self._write_debug_info is not None:
+ self.debug_info.append((self._write_debug_info, self.code_lineno))
+ self._write_debug_info = None
+ self._first_write = False
+ self.stream.write(" " * self._indentation)
+ self._new_lines = 0
+ self.stream.write(x)
+
+ def writeline(
+ self, x: str, node: t.Optional[nodes.Node] = None, extra: int = 0
+ ) -> None:
+ """Combination of newline and write."""
+ self.newline(node, extra)
+ self.write(x)
+
+ def newline(self, node: t.Optional[nodes.Node] = None, extra: int = 0) -> None:
+ """Add one or more newlines before the next write."""
+ self._new_lines = max(self._new_lines, 1 + extra)
+ if node is not None and node.lineno != self._last_line:
+ self._write_debug_info = node.lineno
+ self._last_line = node.lineno
+
+ def signature(
+ self,
+ node: t.Union[nodes.Call, nodes.Filter, nodes.Test],
+ frame: Frame,
+ extra_kwargs: t.Optional[t.Mapping[str, t.Any]] = None,
+ ) -> None:
+ """Writes a function call to the stream for the current node.
+ A leading comma is added automatically. The extra keyword
+        arguments may not include Python keywords, otherwise a syntax
+        error could occur. The extra keyword arguments should be given
+        as a Python dict.
+ """
+ # if any of the given keyword arguments is a python keyword
+ # we have to make sure that no invalid call is created.
+ kwarg_workaround = any(
+ is_python_keyword(t.cast(str, k))
+ for k in chain((x.key for x in node.kwargs), extra_kwargs or ())
+ )
+
+ for arg in node.args:
+ self.write(", ")
+ self.visit(arg, frame)
+
+ if not kwarg_workaround:
+ for kwarg in node.kwargs:
+ self.write(", ")
+ self.visit(kwarg, frame)
+ if extra_kwargs is not None:
+ for key, value in extra_kwargs.items():
+ self.write(f", {key}={value}")
+ if node.dyn_args:
+ self.write(", *")
+ self.visit(node.dyn_args, frame)
+
+ if kwarg_workaround:
+ if node.dyn_kwargs is not None:
+ self.write(", **dict({")
+ else:
+ self.write(", **{")
+ for kwarg in node.kwargs:
+ self.write(f"{kwarg.key!r}: ")
+ self.visit(kwarg.value, frame)
+ self.write(", ")
+ if extra_kwargs is not None:
+ for key, value in extra_kwargs.items():
+ self.write(f"{key!r}: {value}, ")
+ if node.dyn_kwargs is not None:
+ self.write("}, **")
+ self.visit(node.dyn_kwargs, frame)
+ self.write(")")
+ else:
+ self.write("}")
+
+ elif node.dyn_kwargs is not None:
+ self.write(", **")
+ self.visit(node.dyn_kwargs, frame)
+
+ def pull_dependencies(self, nodes: t.Iterable[nodes.Node]) -> None:
+ """Find all filter and test names used in the template and
+ assign them to variables in the compiled namespace. Checking
+ that the names are registered with the environment is done when
+ compiling the Filter and Test nodes. If the node is in an If or
+ CondExpr node, the check is done at runtime instead.
+
+ .. versionchanged:: 3.0
+ Filters and tests in If and CondExpr nodes are checked at
+ runtime instead of compile time.
+ """
+ visitor = DependencyFinderVisitor()
+
+ for node in nodes:
+ visitor.visit(node)
+
+ for id_map, names, dependency in (
+ (self.filters, visitor.filters, "filters"),
+ (
+ self.tests,
+ visitor.tests,
+ "tests",
+ ),
+ ):
+ for name in sorted(names):
+ if name not in id_map:
+ id_map[name] = self.temporary_identifier()
+
+ # add check during runtime that dependencies used inside of executed
+ # blocks are defined, as this step may be skipped during compile time
+ self.writeline("try:")
+ self.indent()
+ self.writeline(f"{id_map[name]} = environment.{dependency}[{name!r}]")
+ self.outdent()
+ self.writeline("except KeyError:")
+ self.indent()
+ self.writeline("@internalcode")
+ self.writeline(f"def {id_map[name]}(*unused):")
+ self.indent()
+ self.writeline(
+ f'raise TemplateRuntimeError("No {dependency[:-1]}'
+ f' named {name!r} found.")'
+ )
+ self.outdent()
+ self.outdent()
+
+ def enter_frame(self, frame: Frame) -> None:
+ undefs = []
+ for target, (action, param) in frame.symbols.loads.items():
+ if action == VAR_LOAD_PARAMETER:
+ pass
+ elif action == VAR_LOAD_RESOLVE:
+ self.writeline(f"{target} = {self.get_resolve_func()}({param!r})")
+ elif action == VAR_LOAD_ALIAS:
+ self.writeline(f"{target} = {param}")
+ elif action == VAR_LOAD_UNDEFINED:
+ undefs.append(target)
+ else:
+ raise NotImplementedError("unknown load instruction")
+ if undefs:
+ self.writeline(f"{' = '.join(undefs)} = missing")
+
+ def leave_frame(self, frame: Frame, with_python_scope: bool = False) -> None:
+ if not with_python_scope:
+ undefs = []
+ for target in frame.symbols.loads:
+ undefs.append(target)
+ if undefs:
+ self.writeline(f"{' = '.join(undefs)} = missing")
+
+ def choose_async(self, async_value: str = "async ", sync_value: str = "") -> str:
+ return async_value if self.environment.is_async else sync_value
+
+ def func(self, name: str) -> str:
+ return f"{self.choose_async()}def {name}"
+
+ def macro_body(
+ self, node: t.Union[nodes.Macro, nodes.CallBlock], frame: Frame
+ ) -> t.Tuple[Frame, MacroRef]:
+ """Dump the function def of a macro or call block."""
+ frame = frame.inner()
+ frame.symbols.analyze_node(node)
+ macro_ref = MacroRef(node)
+
+ explicit_caller = None
+ skip_special_params = set()
+ args = []
+
+ for idx, arg in enumerate(node.args):
+ if arg.name == "caller":
+ explicit_caller = idx
+ if arg.name in ("kwargs", "varargs"):
+ skip_special_params.add(arg.name)
+ args.append(frame.symbols.ref(arg.name))
+
+ undeclared = find_undeclared(node.body, ("caller", "kwargs", "varargs"))
+
+ if "caller" in undeclared:
+ # In older Jinja versions there was a bug that allowed caller
+ # to retain the special behavior even if it was mentioned in
+ # the argument list. However thankfully this was only really
+ # working if it was the last argument. So we are explicitly
+            # checking this now and erroring out if it is anywhere else in
+ # the argument list.
+ if explicit_caller is not None:
+ try:
+ node.defaults[explicit_caller - len(node.args)]
+ except IndexError:
+ self.fail(
+ "When defining macros or call blocks the "
+ 'special "caller" argument must be omitted '
+ "or be given a default.",
+ node.lineno,
+ )
+ else:
+ args.append(frame.symbols.declare_parameter("caller"))
+ macro_ref.accesses_caller = True
+ if "kwargs" in undeclared and "kwargs" not in skip_special_params:
+ args.append(frame.symbols.declare_parameter("kwargs"))
+ macro_ref.accesses_kwargs = True
+ if "varargs" in undeclared and "varargs" not in skip_special_params:
+ args.append(frame.symbols.declare_parameter("varargs"))
+ macro_ref.accesses_varargs = True
+
+ # macros are delayed, they never require output checks
+ frame.require_output_check = False
+ frame.symbols.analyze_node(node)
+ self.writeline(f"{self.func('macro')}({', '.join(args)}):", node)
+ self.indent()
+
+ self.buffer(frame)
+ self.enter_frame(frame)
+
+ self.push_parameter_definitions(frame)
+ for idx, arg in enumerate(node.args):
+ ref = frame.symbols.ref(arg.name)
+ self.writeline(f"if {ref} is missing:")
+ self.indent()
+ try:
+ default = node.defaults[idx - len(node.args)]
+ except IndexError:
+ self.writeline(
+ f'{ref} = undefined("parameter {arg.name!r} was not provided",'
+ f" name={arg.name!r})"
+ )
+ else:
+ self.writeline(f"{ref} = ")
+ self.visit(default, frame)
+ self.mark_parameter_stored(ref)
+ self.outdent()
+ self.pop_parameter_definitions()
+
+ self.blockvisit(node.body, frame)
+ self.return_buffer_contents(frame, force_unescaped=True)
+ self.leave_frame(frame, with_python_scope=True)
+ self.outdent()
+
+ return frame, macro_ref
+
+ def macro_def(self, macro_ref: MacroRef, frame: Frame) -> None:
+ """Dump the macro definition for the def created by macro_body."""
+ arg_tuple = ", ".join(repr(x.name) for x in macro_ref.node.args)
+ name = getattr(macro_ref.node, "name", None)
+ if len(macro_ref.node.args) == 1:
+ arg_tuple += ","
+ self.write(
+ f"Macro(environment, macro, {name!r}, ({arg_tuple}),"
+ f" {macro_ref.accesses_kwargs!r}, {macro_ref.accesses_varargs!r},"
+ f" {macro_ref.accesses_caller!r}, context.eval_ctx.autoescape)"
+ )
+
+ def position(self, node: nodes.Node) -> str:
+ """Return a human readable position for the node."""
+ rv = f"line {node.lineno}"
+ if self.name is not None:
+ rv = f"{rv} in {self.name!r}"
+ return rv
+
+ def dump_local_context(self, frame: Frame) -> str:
+ items_kv = ", ".join(
+ f"{name!r}: {target}"
+ for name, target in frame.symbols.dump_stores().items()
+ )
+ return f"{{{items_kv}}}"
+
+ def write_commons(self) -> None:
+ """Writes a common preamble that is used by root and block functions.
+ Primarily this sets up common local helpers and enforces a generator
+ through a dead branch.
+ """
+ self.writeline("resolve = context.resolve_or_missing")
+ self.writeline("undefined = environment.undefined")
+ self.writeline("concat = environment.concat")
+ # always use the standard Undefined class for the implicit else of
+ # conditional expressions
+ self.writeline("cond_expr_undefined = Undefined")
+ self.writeline("if 0: yield None")
+
+ def push_parameter_definitions(self, frame: Frame) -> None:
+ """Pushes all parameter targets from the given frame into a local
+ stack that permits tracking of yet to be assigned parameters. In
+ particular this enables the optimization from `visit_Name` to skip
+ undefined expressions for parameters in macros as macros can reference
+ otherwise unbound parameters.
+ """
+ self._param_def_block.append(frame.symbols.dump_param_targets())
+
+ def pop_parameter_definitions(self) -> None:
+ """Pops the current parameter definitions set."""
+ self._param_def_block.pop()
+
+ def mark_parameter_stored(self, target: str) -> None:
+ """Marks a parameter in the current parameter definitions as stored.
+ This will skip the enforced undefined checks.
+ """
+ if self._param_def_block:
+ self._param_def_block[-1].discard(target)
+
+ def push_context_reference(self, target: str) -> None:
+ self._context_reference_stack.append(target)
+
+ def pop_context_reference(self) -> None:
+ self._context_reference_stack.pop()
+
+ def get_context_ref(self) -> str:
+ return self._context_reference_stack[-1]
+
+ def get_resolve_func(self) -> str:
+ target = self._context_reference_stack[-1]
+ if target == "context":
+ return "resolve"
+ return f"{target}.resolve"
+
+ def derive_context(self, frame: Frame) -> str:
+ return f"{self.get_context_ref()}.derived({self.dump_local_context(frame)})"
+
+ def parameter_is_undeclared(self, target: str) -> bool:
+ """Checks if a given target is an undeclared parameter."""
+ if not self._param_def_block:
+ return False
+ return target in self._param_def_block[-1]
+
+ def push_assign_tracking(self) -> None:
+ """Pushes a new layer for assignment tracking."""
+ self._assign_stack.append(set())
+
+ def pop_assign_tracking(self, frame: Frame) -> None:
+ """Pops the topmost level for assignment tracking and updates the
+ context variables if necessary.
+ """
+ vars = self._assign_stack.pop()
+ if (
+ not frame.block_frame
+ and not frame.loop_frame
+ and not frame.toplevel
+ or not vars
+ ):
+ return
+ public_names = [x for x in vars if x[:1] != "_"]
+ if len(vars) == 1:
+ name = next(iter(vars))
+ ref = frame.symbols.ref(name)
+ if frame.loop_frame:
+ self.writeline(f"_loop_vars[{name!r}] = {ref}")
+ return
+ if frame.block_frame:
+ self.writeline(f"_block_vars[{name!r}] = {ref}")
+ return
+ self.writeline(f"context.vars[{name!r}] = {ref}")
+ else:
+ if frame.loop_frame:
+ self.writeline("_loop_vars.update({")
+ elif frame.block_frame:
+ self.writeline("_block_vars.update({")
+ else:
+ self.writeline("context.vars.update({")
+ for idx, name in enumerate(sorted(vars)):
+ if idx:
+ self.write(", ")
+ ref = frame.symbols.ref(name)
+ self.write(f"{name!r}: {ref}")
+ self.write("})")
+ if not frame.block_frame and not frame.loop_frame and public_names:
+ if len(public_names) == 1:
+ self.writeline(f"context.exported_vars.add({public_names[0]!r})")
+ else:
+ names_str = ", ".join(map(repr, sorted(public_names)))
+ self.writeline(f"context.exported_vars.update(({names_str}))")
+
+ # -- Statement Visitors
+
+ def visit_Template(
+ self, node: nodes.Template, frame: t.Optional[Frame] = None
+ ) -> None:
+ assert frame is None, "no root frame allowed"
+ eval_ctx = EvalContext(self.environment, self.name)
+
+ from .runtime import async_exported
+ from .runtime import exported
+
+ if self.environment.is_async:
+ exported_names = sorted(exported + async_exported)
+ else:
+ exported_names = sorted(exported)
+
+ self.writeline("from jinja2.runtime import " + ", ".join(exported_names))
+
+ # if we want a deferred initialization we cannot move the
+ # environment into a local name
+ envenv = "" if self.defer_init else ", environment=environment"
+
+ # do we have an extends tag at all? If not, we can save some
+ # overhead by just not processing any inheritance code.
+ have_extends = node.find(nodes.Extends) is not None
+
+ # find all blocks
+ for block in node.find_all(nodes.Block):
+ if block.name in self.blocks:
+ self.fail(f"block {block.name!r} defined twice", block.lineno)
+ self.blocks[block.name] = block
+
+ # find all imports and import them
+ for import_ in node.find_all(nodes.ImportedName):
+ if import_.importname not in self.import_aliases:
+ imp = import_.importname
+ self.import_aliases[imp] = alias = self.temporary_identifier()
+ if "." in imp:
+ module, obj = imp.rsplit(".", 1)
+ self.writeline(f"from {module} import {obj} as {alias}")
+ else:
+ self.writeline(f"import {imp} as {alias}")
+
+ # add the load name
+ self.writeline(f"name = {self.name!r}")
+
+ # generate the root render function.
+ self.writeline(
+ f"{self.func('root')}(context, missing=missing{envenv}):", extra=1
+ )
+ self.indent()
+ self.write_commons()
+
+ # process the root
+ frame = Frame(eval_ctx)
+ if "self" in find_undeclared(node.body, ("self",)):
+ ref = frame.symbols.declare_parameter("self")
+ self.writeline(f"{ref} = TemplateReference(context)")
+ frame.symbols.analyze_node(node)
+ frame.toplevel = frame.rootlevel = True
+ frame.require_output_check = have_extends and not self.has_known_extends
+ if have_extends:
+ self.writeline("parent_template = None")
+ self.enter_frame(frame)
+ self.pull_dependencies(node.body)
+ self.blockvisit(node.body, frame)
+ self.leave_frame(frame, with_python_scope=True)
+ self.outdent()
+
+ # make sure that the parent root is called.
+ if have_extends:
+ if not self.has_known_extends:
+ self.indent()
+ self.writeline("if parent_template is not None:")
+ self.indent()
+ if not self.environment.is_async:
+ self.writeline("yield from parent_template.root_render_func(context)")
+ else:
+ self.writeline("agen = parent_template.root_render_func(context)")
+ self.writeline("try:")
+ self.indent()
+ self.writeline("async for event in agen:")
+ self.indent()
+ self.writeline("yield event")
+ self.outdent()
+ self.outdent()
+ self.writeline("finally: await agen.aclose()")
+ self.outdent(1 + (not self.has_known_extends))
+
+ # at this point we now have the blocks collected and can visit them too.
+ for name, block in self.blocks.items():
+ self.writeline(
+ f"{self.func('block_' + name)}(context, missing=missing{envenv}):",
+ block,
+ 1,
+ )
+ self.indent()
+ self.write_commons()
+ # It's important that we do not make this frame a child of the
+ # toplevel template. This would cause a variety of
+ # interesting issues with identifier tracking.
+ block_frame = Frame(eval_ctx)
+ block_frame.block_frame = True
+ undeclared = find_undeclared(block.body, ("self", "super"))
+ if "self" in undeclared:
+ ref = block_frame.symbols.declare_parameter("self")
+ self.writeline(f"{ref} = TemplateReference(context)")
+ if "super" in undeclared:
+ ref = block_frame.symbols.declare_parameter("super")
+ self.writeline(f"{ref} = context.super({name!r}, block_{name})")
+ block_frame.symbols.analyze_node(block)
+ block_frame.block = name
+ self.writeline("_block_vars = {}")
+ self.enter_frame(block_frame)
+ self.pull_dependencies(block.body)
+ self.blockvisit(block.body, block_frame)
+ self.leave_frame(block_frame, with_python_scope=True)
+ self.outdent()
+
+ blocks_kv_str = ", ".join(f"{x!r}: block_{x}" for x in self.blocks)
+ self.writeline(f"blocks = {{{blocks_kv_str}}}", extra=1)
+ debug_kv_str = "&".join(f"{k}={v}" for k, v in self.debug_info)
+ self.writeline(f"debug_info = {debug_kv_str!r}")
+
+ def visit_Block(self, node: nodes.Block, frame: Frame) -> None:
+ """Call a block and register it for the template."""
+ level = 0
+ if frame.toplevel:
+ # if we know that we are a child template, there is no need to
+ # check if we are one
+ if self.has_known_extends:
+ return
+ if self.extends_so_far > 0:
+ self.writeline("if parent_template is None:")
+ self.indent()
+ level += 1
+
+ if node.scoped:
+ context = self.derive_context(frame)
+ else:
+ context = self.get_context_ref()
+
+ if node.required:
+ self.writeline(f"if len(context.blocks[{node.name!r}]) <= 1:", node)
+ self.indent()
+ self.writeline(
+ f'raise TemplateRuntimeError("Required block {node.name!r} not found")',
+ node,
+ )
+ self.outdent()
+
+ if not self.environment.is_async and frame.buffer is None:
+ self.writeline(
+ f"yield from context.blocks[{node.name!r}][0]({context})", node
+ )
+ else:
+ self.writeline(f"gen = context.blocks[{node.name!r}][0]({context})")
+ self.writeline("try:")
+ self.indent()
+ self.writeline(
+ f"{self.choose_async()}for event in gen:",
+ node,
+ )
+ self.indent()
+ self.simple_write("event", frame)
+ self.outdent()
+ self.outdent()
+ self.writeline(
+ f"finally: {self.choose_async('await gen.aclose()', 'gen.close()')}"
+ )
+
+ self.outdent(level)
+
+ def visit_Extends(self, node: nodes.Extends, frame: Frame) -> None:
+ """Calls the extender."""
+ if not frame.toplevel:
+ self.fail("cannot use extend from a non top-level scope", node.lineno)
+
+ # if the number of extends statements in general is zero so
+ # far, we don't have to add a check if something extended
+ # the template before this one.
+ if self.extends_so_far > 0:
+ # if we have a known extends we just add a template runtime
+ # error into the generated code. We could catch that at compile
+            # time too, but I prefer not to confuse users by throwing the
+ # same error at different times just "because we can".
+ if not self.has_known_extends:
+ self.writeline("if parent_template is not None:")
+ self.indent()
+ self.writeline('raise TemplateRuntimeError("extended multiple times")')
+
+ # if we have a known extends already we don't need that code here
+ # as we know that the template execution will end here.
+ if self.has_known_extends:
+ raise CompilerExit()
+ else:
+ self.outdent()
+
+ self.writeline("parent_template = environment.get_template(", node)
+ self.visit(node.template, frame)
+ self.write(f", {self.name!r})")
+ self.writeline("for name, parent_block in parent_template.blocks.items():")
+ self.indent()
+ self.writeline("context.blocks.setdefault(name, []).append(parent_block)")
+ self.outdent()
+
+ # if this extends statement was in the root level we can take
+ # advantage of that information and simplify the generated code
+ # in the top level from this point onwards
+ if frame.rootlevel:
+ self.has_known_extends = True
+
+ # and now we have one more
+ self.extends_so_far += 1
+
+ def visit_Include(self, node: nodes.Include, frame: Frame) -> None:
+ """Handles includes."""
+ if node.ignore_missing:
+ self.writeline("try:")
+ self.indent()
+
+ func_name = "get_or_select_template"
+ if isinstance(node.template, nodes.Const):
+ if isinstance(node.template.value, str):
+ func_name = "get_template"
+ elif isinstance(node.template.value, (tuple, list)):
+ func_name = "select_template"
+ elif isinstance(node.template, (nodes.Tuple, nodes.List)):
+ func_name = "select_template"
+
+ self.writeline(f"template = environment.{func_name}(", node)
+ self.visit(node.template, frame)
+ self.write(f", {self.name!r})")
+ if node.ignore_missing:
+ self.outdent()
+ self.writeline("except TemplateNotFound:")
+ self.indent()
+ self.writeline("pass")
+ self.outdent()
+ self.writeline("else:")
+ self.indent()
+
+ def loop_body() -> None:
+ self.indent()
+ self.simple_write("event", frame)
+ self.outdent()
+
+ if node.with_context:
+ self.writeline(
+ f"gen = template.root_render_func("
+ "template.new_context(context.get_all(), True,"
+ f" {self.dump_local_context(frame)}))"
+ )
+ self.writeline("try:")
+ self.indent()
+ self.writeline(f"{self.choose_async()}for event in gen:")
+ loop_body()
+ self.outdent()
+ self.writeline(
+ f"finally: {self.choose_async('await gen.aclose()', 'gen.close()')}"
+ )
+ elif self.environment.is_async:
+ self.writeline(
+ "for event in (await template._get_default_module_async())"
+ "._body_stream:"
+ )
+ loop_body()
+ else:
+ self.writeline("yield from template._get_default_module()._body_stream")
+
+ if node.ignore_missing:
+ self.outdent()
+
+ def _import_common(
+ self, node: t.Union[nodes.Import, nodes.FromImport], frame: Frame
+ ) -> None:
+ self.write(f"{self.choose_async('await ')}environment.get_template(")
+ self.visit(node.template, frame)
+ self.write(f", {self.name!r}).")
+
+ if node.with_context:
+ f_name = f"make_module{self.choose_async('_async')}"
+ self.write(
+ f"{f_name}(context.get_all(), True, {self.dump_local_context(frame)})"
+ )
+ else:
+ self.write(f"_get_default_module{self.choose_async('_async')}(context)")
+
+ def visit_Import(self, node: nodes.Import, frame: Frame) -> None:
+ """Visit regular imports."""
+ self.writeline(f"{frame.symbols.ref(node.target)} = ", node)
+ if frame.toplevel:
+ self.write(f"context.vars[{node.target!r}] = ")
+
+ self._import_common(node, frame)
+
+ if frame.toplevel and not node.target.startswith("_"):
+ self.writeline(f"context.exported_vars.discard({node.target!r})")
+
+ def visit_FromImport(self, node: nodes.FromImport, frame: Frame) -> None:
+ """Visit named imports."""
+ self.newline(node)
+ self.write("included_template = ")
+ self._import_common(node, frame)
+ var_names = []
+ discarded_names = []
+ for name in node.names:
+ if isinstance(name, tuple):
+ name, alias = name
+ else:
+ alias = name
+ self.writeline(
+ f"{frame.symbols.ref(alias)} ="
+ f" getattr(included_template, {name!r}, missing)"
+ )
+ self.writeline(f"if {frame.symbols.ref(alias)} is missing:")
+ self.indent()
+ # The position will contain the template name, and will be formatted
+ # into a string that will be compiled into an f-string. Curly braces
+ # in the name must be replaced with escapes so that they will not be
+ # executed as part of the f-string.
+ position = self.position(node).replace("{", "{{").replace("}", "}}")
+ message = (
+ "the template {included_template.__name__!r}"
+ f" (imported on {position})"
+ f" does not export the requested name {name!r}"
+ )
+ self.writeline(
+ f"{frame.symbols.ref(alias)} = undefined(f{message!r}, name={name!r})"
+ )
+ self.outdent()
+ if frame.toplevel:
+ var_names.append(alias)
+ if not alias.startswith("_"):
+ discarded_names.append(alias)
+
+ if var_names:
+ if len(var_names) == 1:
+ name = var_names[0]
+ self.writeline(f"context.vars[{name!r}] = {frame.symbols.ref(name)}")
+ else:
+ names_kv = ", ".join(
+ f"{name!r}: {frame.symbols.ref(name)}" for name in var_names
+ )
+ self.writeline(f"context.vars.update({{{names_kv}}})")
+ if discarded_names:
+ if len(discarded_names) == 1:
+ self.writeline(f"context.exported_vars.discard({discarded_names[0]!r})")
+ else:
+ names_str = ", ".join(map(repr, discarded_names))
+ self.writeline(
+ f"context.exported_vars.difference_update(({names_str}))"
+ )
+
+ def visit_For(self, node: nodes.For, frame: Frame) -> None:
+ loop_frame = frame.inner()
+ loop_frame.loop_frame = True
+ test_frame = frame.inner()
+ else_frame = frame.inner()
+
+        # try to figure out if we have an extended loop. An extended loop
+        # is necessary if the loop is in recursive mode, if the special loop
+        # variable is accessed in the body, or if the body is a scoped block.
+ extended_loop = (
+ node.recursive
+ or "loop"
+ in find_undeclared(node.iter_child_nodes(only=("body",)), ("loop",))
+ or any(block.scoped for block in node.find_all(nodes.Block))
+ )
+
+ loop_ref = None
+ if extended_loop:
+ loop_ref = loop_frame.symbols.declare_parameter("loop")
+
+ loop_frame.symbols.analyze_node(node, for_branch="body")
+ if node.else_:
+ else_frame.symbols.analyze_node(node, for_branch="else")
+
+ if node.test:
+ loop_filter_func = self.temporary_identifier()
+ test_frame.symbols.analyze_node(node, for_branch="test")
+ self.writeline(f"{self.func(loop_filter_func)}(fiter):", node.test)
+ self.indent()
+ self.enter_frame(test_frame)
+ self.writeline(self.choose_async("async for ", "for "))
+ self.visit(node.target, loop_frame)
+ self.write(" in ")
+ self.write(self.choose_async("auto_aiter(fiter)", "fiter"))
+ self.write(":")
+ self.indent()
+ self.writeline("if ", node.test)
+ self.visit(node.test, test_frame)
+ self.write(":")
+ self.indent()
+ self.writeline("yield ")
+ self.visit(node.target, loop_frame)
+ self.outdent(3)
+ self.leave_frame(test_frame, with_python_scope=True)
+
+        # if we don't have a recursive loop we have to find the shadowed
+ # variables at that point. Because loops can be nested but the loop
+ # variable is a special one we have to enforce aliasing for it.
+ if node.recursive:
+ self.writeline(
+ f"{self.func('loop')}(reciter, loop_render_func, depth=0):", node
+ )
+ self.indent()
+ self.buffer(loop_frame)
+
+ # Use the same buffer for the else frame
+ else_frame.buffer = loop_frame.buffer
+
+ # make sure the loop variable is a special one and raise a template
+ # assertion error if a loop tries to write to loop
+ if extended_loop:
+ self.writeline(f"{loop_ref} = missing")
+
+ for name in node.find_all(nodes.Name):
+ if name.ctx == "store" and name.name == "loop":
+ self.fail(
+ "Can't assign to special loop variable in for-loop target",
+ name.lineno,
+ )
+
+ if node.else_:
+ iteration_indicator = self.temporary_identifier()
+ self.writeline(f"{iteration_indicator} = 1")
+
+ self.writeline(self.choose_async("async for ", "for "), node)
+ self.visit(node.target, loop_frame)
+ if extended_loop:
+ self.write(f", {loop_ref} in {self.choose_async('Async')}LoopContext(")
+ else:
+ self.write(" in ")
+
+ if node.test:
+ self.write(f"{loop_filter_func}(")
+ if node.recursive:
+ self.write("reciter")
+ else:
+ if self.environment.is_async and not extended_loop:
+ self.write("auto_aiter(")
+ self.visit(node.iter, frame)
+ if self.environment.is_async and not extended_loop:
+ self.write(")")
+ if node.test:
+ self.write(")")
+
+ if node.recursive:
+ self.write(", undefined, loop_render_func, depth):")
+ else:
+ self.write(", undefined):" if extended_loop else ":")
+
+ self.indent()
+ self.enter_frame(loop_frame)
+
+ self.writeline("_loop_vars = {}")
+ self.blockvisit(node.body, loop_frame)
+ if node.else_:
+ self.writeline(f"{iteration_indicator} = 0")
+ self.outdent()
+ self.leave_frame(
+ loop_frame, with_python_scope=node.recursive and not node.else_
+ )
+
+ if node.else_:
+ self.writeline(f"if {iteration_indicator}:")
+ self.indent()
+ self.enter_frame(else_frame)
+ self.blockvisit(node.else_, else_frame)
+ self.leave_frame(else_frame)
+ self.outdent()
+
+ # if the node was recursive we have to return the buffer contents
+ # and start the iteration code
+ if node.recursive:
+ self.return_buffer_contents(loop_frame)
+ self.outdent()
+ self.start_write(frame, node)
+ self.write(f"{self.choose_async('await ')}loop(")
+ if self.environment.is_async:
+ self.write("auto_aiter(")
+ self.visit(node.iter, frame)
+ if self.environment.is_async:
+ self.write(")")
+ self.write(", loop)")
+ self.end_write(frame)
+
+ # at the end of the iteration, clear any assignments made in the
+ # loop from the top level
+ if self._assign_stack:
+ self._assign_stack[-1].difference_update(loop_frame.symbols.stores)
+
+ def visit_If(self, node: nodes.If, frame: Frame) -> None:
+ if_frame = frame.soft()
+ self.writeline("if ", node)
+ self.visit(node.test, if_frame)
+ self.write(":")
+ self.indent()
+ self.blockvisit(node.body, if_frame)
+ self.outdent()
+ for elif_ in node.elif_:
+ self.writeline("elif ", elif_)
+ self.visit(elif_.test, if_frame)
+ self.write(":")
+ self.indent()
+ self.blockvisit(elif_.body, if_frame)
+ self.outdent()
+ if node.else_:
+ self.writeline("else:")
+ self.indent()
+ self.blockvisit(node.else_, if_frame)
+ self.outdent()
+
+ def visit_Macro(self, node: nodes.Macro, frame: Frame) -> None:
+ macro_frame, macro_ref = self.macro_body(node, frame)
+ self.newline()
+ if frame.toplevel:
+ if not node.name.startswith("_"):
+ self.write(f"context.exported_vars.add({node.name!r})")
+ self.writeline(f"context.vars[{node.name!r}] = ")
+ self.write(f"{frame.symbols.ref(node.name)} = ")
+ self.macro_def(macro_ref, macro_frame)
+
+ def visit_CallBlock(self, node: nodes.CallBlock, frame: Frame) -> None:
+ call_frame, macro_ref = self.macro_body(node, frame)
+ self.writeline("caller = ")
+ self.macro_def(macro_ref, call_frame)
+ self.start_write(frame, node)
+ self.visit_Call(node.call, frame, forward_caller=True)
+ self.end_write(frame)
+
+ def visit_FilterBlock(self, node: nodes.FilterBlock, frame: Frame) -> None:
+ filter_frame = frame.inner()
+ filter_frame.symbols.analyze_node(node)
+ self.enter_frame(filter_frame)
+ self.buffer(filter_frame)
+ self.blockvisit(node.body, filter_frame)
+ self.start_write(frame, node)
+ self.visit_Filter(node.filter, filter_frame)
+ self.end_write(frame)
+ self.leave_frame(filter_frame)
+
+ def visit_With(self, node: nodes.With, frame: Frame) -> None:
+ with_frame = frame.inner()
+ with_frame.symbols.analyze_node(node)
+ self.enter_frame(with_frame)
+ for target, expr in zip(node.targets, node.values):
+ self.newline()
+ self.visit(target, with_frame)
+ self.write(" = ")
+ self.visit(expr, frame)
+ self.blockvisit(node.body, with_frame)
+ self.leave_frame(with_frame)
+
+ def visit_ExprStmt(self, node: nodes.ExprStmt, frame: Frame) -> None:
+ self.newline(node)
+ self.visit(node.node, frame)
+
+ class _FinalizeInfo(t.NamedTuple):
+ const: t.Optional[t.Callable[..., str]]
+ src: t.Optional[str]
+
+ @staticmethod
+ def _default_finalize(value: t.Any) -> t.Any:
+ """The default finalize function if the environment isn't
+ configured with one. Or, if the environment has one, this is
+ called on that function's output for constants.
+ """
+ return str(value)
+
+ _finalize: t.Optional[_FinalizeInfo] = None
+
+ def _make_finalize(self) -> _FinalizeInfo:
+ """Build the finalize function to be used on constants and at
+ runtime. Cached so it's only created once for all output nodes.
+
+ Returns a ``namedtuple`` with the following attributes:
+
+ ``const``
+ A function to finalize constant data at compile time.
+
+ ``src``
+ Source code to output around nodes to be evaluated at
+ runtime.
+ """
+ if self._finalize is not None:
+ return self._finalize
+
+ finalize: t.Optional[t.Callable[..., t.Any]]
+ finalize = default = self._default_finalize
+ src = None
+
+ if self.environment.finalize:
+ src = "environment.finalize("
+ env_finalize = self.environment.finalize
+ pass_arg = {
+ _PassArg.context: "context",
+ _PassArg.eval_context: "context.eval_ctx",
+ _PassArg.environment: "environment",
+ }.get(
+ _PassArg.from_obj(env_finalize) # type: ignore
+ )
+ finalize = None
+
+ if pass_arg is None:
+
+ def finalize(value: t.Any) -> t.Any: # noqa: F811
+ return default(env_finalize(value))
+
+ else:
+ src = f"{src}{pass_arg}, "
+
+ if pass_arg == "environment":
+
+ def finalize(value: t.Any) -> t.Any: # noqa: F811
+ return default(env_finalize(self.environment, value))
+
+ self._finalize = self._FinalizeInfo(finalize, src)
+ return self._finalize
+
+ def _output_const_repr(self, group: t.Iterable[t.Any]) -> str:
+ """Given a group of constant values converted from ``Output``
+ child nodes, produce a string to write to the template module
+ source.
+ """
+ return repr(concat(group))
+
+ def _output_child_to_const(
+ self, node: nodes.Expr, frame: Frame, finalize: _FinalizeInfo
+ ) -> str:
+ """Try to optimize a child of an ``Output`` node by trying to
+ convert it to constant, finalized data at compile time.
+
+ If :exc:`Impossible` is raised, the node is not constant and
+        will be evaluated at runtime. Any other exception also causes the
+        node to be evaluated at runtime, for easier debugging.
+ """
+ const = node.as_const(frame.eval_ctx)
+
+ if frame.eval_ctx.autoescape:
+ const = escape(const)
+
+ # Template data doesn't go through finalize.
+ if isinstance(node, nodes.TemplateData):
+ return str(const)
+
+ return finalize.const(const) # type: ignore
+
+ def _output_child_pre(
+ self, node: nodes.Expr, frame: Frame, finalize: _FinalizeInfo
+ ) -> None:
+ """Output extra source code before visiting a child of an
+ ``Output`` node.
+ """
+ if frame.eval_ctx.volatile:
+ self.write("(escape if context.eval_ctx.autoescape else str)(")
+ elif frame.eval_ctx.autoescape:
+ self.write("escape(")
+ else:
+ self.write("str(")
+
+ if finalize.src is not None:
+ self.write(finalize.src)
+
+ def _output_child_post(
+ self, node: nodes.Expr, frame: Frame, finalize: _FinalizeInfo
+ ) -> None:
+ """Output extra source code after visiting a child of an
+ ``Output`` node.
+ """
+ self.write(")")
+
+ if finalize.src is not None:
+ self.write(")")
+
+ def visit_Output(self, node: nodes.Output, frame: Frame) -> None:
+ # If an extends is active, don't render outside a block.
+ if frame.require_output_check:
+ # A top-level extends is known to exist at compile time.
+ if self.has_known_extends:
+ return
+
+ self.writeline("if parent_template is None:")
+ self.indent()
+
+ finalize = self._make_finalize()
+ body: t.List[t.Union[t.List[t.Any], nodes.Expr]] = []
+
+ # Evaluate constants at compile time if possible. Each item in
+ # body will be either a list of static data or a node to be
+ # evaluated at runtime.
+ for child in node.nodes:
+ try:
+ if not (
+ # If the finalize function requires runtime context,
+ # constants can't be evaluated at compile time.
+ finalize.const
+ # Unless it's basic template data that won't be
+ # finalized anyway.
+ or isinstance(child, nodes.TemplateData)
+ ):
+ raise nodes.Impossible()
+
+ const = self._output_child_to_const(child, frame, finalize)
+ except (nodes.Impossible, Exception):
+ # The node was not constant and needs to be evaluated at
+ # runtime. Or another error was raised, which is easier
+ # to debug at runtime.
+ body.append(child)
+ continue
+
+ if body and isinstance(body[-1], list):
+ body[-1].append(const)
+ else:
+ body.append([const])
+
+ if frame.buffer is not None:
+ if len(body) == 1:
+ self.writeline(f"{frame.buffer}.append(")
+ else:
+ self.writeline(f"{frame.buffer}.extend((")
+
+ self.indent()
+
+ for item in body:
+ if isinstance(item, list):
+ # A group of constant data to join and output.
+ val = self._output_const_repr(item)
+
+ if frame.buffer is None:
+ self.writeline("yield " + val)
+ else:
+ self.writeline(val + ",")
+ else:
+ if frame.buffer is None:
+ self.writeline("yield ", item)
+ else:
+ self.newline(item)
+
+ # A node to be evaluated at runtime.
+ self._output_child_pre(item, frame, finalize)
+ self.visit(item, frame)
+ self._output_child_post(item, frame, finalize)
+
+ if frame.buffer is not None:
+ self.write(",")
+
+ if frame.buffer is not None:
+ self.outdent()
+ self.writeline(")" if len(body) == 1 else "))")
+
+ if frame.require_output_check:
+ self.outdent()
+
+ def visit_Assign(self, node: nodes.Assign, frame: Frame) -> None:
+ self.push_assign_tracking()
+
+ # ``a.b`` is allowed for assignment, and is parsed as an NSRef. However,
+ # it is only valid if it references a Namespace object. Emit a check for
+ # that for each ref here, before assignment code is emitted. This can't
+ # be done in visit_NSRef as the ref could be in the middle of a tuple.
+ seen_refs: t.Set[str] = set()
+
+ for nsref in node.find_all(nodes.NSRef):
+ if nsref.name in seen_refs:
+ # Only emit the check for each reference once, in case the same
+ # ref is used multiple times in a tuple, `ns.a, ns.b = c, d`.
+ continue
+
+ seen_refs.add(nsref.name)
+ ref = frame.symbols.ref(nsref.name)
+ self.writeline(f"if not isinstance({ref}, Namespace):")
+ self.indent()
+ self.writeline(
+ "raise TemplateRuntimeError"
+ '("cannot assign attribute on non-namespace object")'
+ )
+ self.outdent()
+
+ self.newline(node)
+ self.visit(node.target, frame)
+ self.write(" = ")
+ self.visit(node.node, frame)
+ self.pop_assign_tracking(frame)
+
+ def visit_AssignBlock(self, node: nodes.AssignBlock, frame: Frame) -> None:
+ self.push_assign_tracking()
+ block_frame = frame.inner()
+ # This is a special case. Since a set block always captures we
+ # will disable output checks. This way one can use set blocks
+ # toplevel even in extended templates.
+ block_frame.require_output_check = False
+ block_frame.symbols.analyze_node(node)
+ self.enter_frame(block_frame)
+ self.buffer(block_frame)
+ self.blockvisit(node.body, block_frame)
+ self.newline(node)
+ self.visit(node.target, frame)
+ self.write(" = (Markup if context.eval_ctx.autoescape else identity)(")
+ if node.filter is not None:
+ self.visit_Filter(node.filter, block_frame)
+ else:
+ self.write(f"concat({block_frame.buffer})")
+ self.write(")")
+ self.pop_assign_tracking(frame)
+ self.leave_frame(block_frame)
+
+ # -- Expression Visitors
+
+ def visit_Name(self, node: nodes.Name, frame: Frame) -> None:
+ if node.ctx == "store" and (
+ frame.toplevel or frame.loop_frame or frame.block_frame
+ ):
+ if self._assign_stack:
+ self._assign_stack[-1].add(node.name)
+ ref = frame.symbols.ref(node.name)
+
+ # If we are looking up a variable we might have to deal with the
+ # case where it's undefined. We can skip that case if the load
+        # instruction indicates a parameter, which is always defined.
+ if node.ctx == "load":
+ load = frame.symbols.find_load(ref)
+ if not (
+ load is not None
+ and load[0] == VAR_LOAD_PARAMETER
+ and not self.parameter_is_undeclared(ref)
+ ):
+ self.write(
+ f"(undefined(name={node.name!r}) if {ref} is missing else {ref})"
+ )
+ return
+
+ self.write(ref)
+
+ def visit_NSRef(self, node: nodes.NSRef, frame: Frame) -> None:
+ # NSRef is a dotted assignment target a.b=c, but uses a[b]=c internally.
+ # visit_Assign emits code to validate that each ref is to a Namespace
+ # object only. That can't be emitted here as the ref could be in the
+ # middle of a tuple assignment.
+ ref = frame.symbols.ref(node.name)
+ self.writeline(f"{ref}[{node.attr!r}]")
+
+ def visit_Const(self, node: nodes.Const, frame: Frame) -> None:
+ val = node.as_const(frame.eval_ctx)
+ if isinstance(val, float):
+ self.write(str(val))
+ else:
+ self.write(repr(val))
+
+ def visit_TemplateData(self, node: nodes.TemplateData, frame: Frame) -> None:
+ try:
+ self.write(repr(node.as_const(frame.eval_ctx)))
+ except nodes.Impossible:
+ self.write(
+ f"(Markup if context.eval_ctx.autoescape else identity)({node.data!r})"
+ )
+
+ def visit_Tuple(self, node: nodes.Tuple, frame: Frame) -> None:
+ self.write("(")
+ idx = -1
+ for idx, item in enumerate(node.items):
+ if idx:
+ self.write(", ")
+ self.visit(item, frame)
+ self.write(",)" if idx == 0 else ")")
+
+ def visit_List(self, node: nodes.List, frame: Frame) -> None:
+ self.write("[")
+ for idx, item in enumerate(node.items):
+ if idx:
+ self.write(", ")
+ self.visit(item, frame)
+ self.write("]")
+
+ def visit_Dict(self, node: nodes.Dict, frame: Frame) -> None:
+ self.write("{")
+ for idx, item in enumerate(node.items):
+ if idx:
+ self.write(", ")
+ self.visit(item.key, frame)
+ self.write(": ")
+ self.visit(item.value, frame)
+ self.write("}")
+
+ visit_Add = _make_binop("+")
+ visit_Sub = _make_binop("-")
+ visit_Mul = _make_binop("*")
+ visit_Div = _make_binop("/")
+ visit_FloorDiv = _make_binop("//")
+ visit_Pow = _make_binop("**")
+ visit_Mod = _make_binop("%")
+ visit_And = _make_binop("and")
+ visit_Or = _make_binop("or")
+ visit_Pos = _make_unop("+")
+ visit_Neg = _make_unop("-")
+ visit_Not = _make_unop("not ")
+
+ @optimizeconst
+ def visit_Concat(self, node: nodes.Concat, frame: Frame) -> None:
+ if frame.eval_ctx.volatile:
+ func_name = "(markup_join if context.eval_ctx.volatile else str_join)"
+ elif frame.eval_ctx.autoescape:
+ func_name = "markup_join"
+ else:
+ func_name = "str_join"
+ self.write(f"{func_name}((")
+ for arg in node.nodes:
+ self.visit(arg, frame)
+ self.write(", ")
+ self.write("))")
+
+ @optimizeconst
+ def visit_Compare(self, node: nodes.Compare, frame: Frame) -> None:
+ self.write("(")
+ self.visit(node.expr, frame)
+ for op in node.ops:
+ self.visit(op, frame)
+ self.write(")")
+
+ def visit_Operand(self, node: nodes.Operand, frame: Frame) -> None:
+ self.write(f" {operators[node.op]} ")
+ self.visit(node.expr, frame)
+
+ @optimizeconst
+ def visit_Getattr(self, node: nodes.Getattr, frame: Frame) -> None:
+ if self.environment.is_async:
+ self.write("(await auto_await(")
+
+ self.write("environment.getattr(")
+ self.visit(node.node, frame)
+ self.write(f", {node.attr!r})")
+
+ if self.environment.is_async:
+ self.write("))")
+
+ @optimizeconst
+ def visit_Getitem(self, node: nodes.Getitem, frame: Frame) -> None:
+ # slices bypass the environment getitem method.
+ if isinstance(node.arg, nodes.Slice):
+ self.visit(node.node, frame)
+ self.write("[")
+ self.visit(node.arg, frame)
+ self.write("]")
+ else:
+ if self.environment.is_async:
+ self.write("(await auto_await(")
+
+ self.write("environment.getitem(")
+ self.visit(node.node, frame)
+ self.write(", ")
+ self.visit(node.arg, frame)
+ self.write(")")
+
+ if self.environment.is_async:
+ self.write("))")
+
+ def visit_Slice(self, node: nodes.Slice, frame: Frame) -> None:
+ if node.start is not None:
+ self.visit(node.start, frame)
+ self.write(":")
+ if node.stop is not None:
+ self.visit(node.stop, frame)
+ if node.step is not None:
+ self.write(":")
+ self.visit(node.step, frame)
+
+ @contextmanager
+ def _filter_test_common(
+ self, node: t.Union[nodes.Filter, nodes.Test], frame: Frame, is_filter: bool
+ ) -> t.Iterator[None]:
+ if self.environment.is_async:
+ self.write("(await auto_await(")
+
+ if is_filter:
+ self.write(f"{self.filters[node.name]}(")
+ func = self.environment.filters.get(node.name)
+ else:
+ self.write(f"{self.tests[node.name]}(")
+ func = self.environment.tests.get(node.name)
+
+ # When inside an If or CondExpr frame, allow the filter to be
+ # undefined at compile time and only raise an error if it's
+ # actually called at runtime. See pull_dependencies.
+ if func is None and not frame.soft_frame:
+ type_name = "filter" if is_filter else "test"
+ self.fail(f"No {type_name} named {node.name!r}.", node.lineno)
+
+ pass_arg = {
+ _PassArg.context: "context",
+ _PassArg.eval_context: "context.eval_ctx",
+ _PassArg.environment: "environment",
+ }.get(
+ _PassArg.from_obj(func) # type: ignore
+ )
+
+ if pass_arg is not None:
+ self.write(f"{pass_arg}, ")
+
+ # Back to the visitor function to handle visiting the target of
+ # the filter or test.
+ yield
+
+ self.signature(node, frame)
+ self.write(")")
+
+ if self.environment.is_async:
+ self.write("))")
+
+ @optimizeconst
+ def visit_Filter(self, node: nodes.Filter, frame: Frame) -> None:
+ with self._filter_test_common(node, frame, True):
+ # if the filter node is None we are inside a filter block
+ # and want to write to the current buffer
+ if node.node is not None:
+ self.visit(node.node, frame)
+ elif frame.eval_ctx.volatile:
+ self.write(
+ f"(Markup(concat({frame.buffer}))"
+ f" if context.eval_ctx.autoescape else concat({frame.buffer}))"
+ )
+ elif frame.eval_ctx.autoescape:
+ self.write(f"Markup(concat({frame.buffer}))")
+ else:
+ self.write(f"concat({frame.buffer})")
+
+ @optimizeconst
+ def visit_Test(self, node: nodes.Test, frame: Frame) -> None:
+ with self._filter_test_common(node, frame, False):
+ self.visit(node.node, frame)
+
+ @optimizeconst
+ def visit_CondExpr(self, node: nodes.CondExpr, frame: Frame) -> None:
+ frame = frame.soft()
+
+ def write_expr2() -> None:
+ if node.expr2 is not None:
+ self.visit(node.expr2, frame)
+ return
+
+ self.write(
+ f'cond_expr_undefined("the inline if-expression on'
+ f" {self.position(node)} evaluated to false and no else"
+ f' section was defined.")'
+ )
+
+ self.write("(")
+ self.visit(node.expr1, frame)
+ self.write(" if ")
+ self.visit(node.test, frame)
+ self.write(" else ")
+ write_expr2()
+ self.write(")")
+
+ @optimizeconst
+ def visit_Call(
+ self, node: nodes.Call, frame: Frame, forward_caller: bool = False
+ ) -> None:
+ if self.environment.is_async:
+ self.write("(await auto_await(")
+ if self.environment.sandboxed:
+ self.write("environment.call(context, ")
+ else:
+ self.write("context.call(")
+ self.visit(node.node, frame)
+ extra_kwargs = {"caller": "caller"} if forward_caller else None
+ loop_kwargs = {"_loop_vars": "_loop_vars"} if frame.loop_frame else {}
+ block_kwargs = {"_block_vars": "_block_vars"} if frame.block_frame else {}
+ if extra_kwargs:
+ extra_kwargs.update(loop_kwargs, **block_kwargs)
+ elif loop_kwargs or block_kwargs:
+ extra_kwargs = dict(loop_kwargs, **block_kwargs)
+ self.signature(node, frame, extra_kwargs)
+ self.write(")")
+ if self.environment.is_async:
+ self.write("))")
+
+ def visit_Keyword(self, node: nodes.Keyword, frame: Frame) -> None:
+ self.write(node.key + "=")
+ self.visit(node.value, frame)
+
+ # -- Unused nodes for extensions
+
+ def visit_MarkSafe(self, node: nodes.MarkSafe, frame: Frame) -> None:
+ self.write("Markup(")
+ self.visit(node.expr, frame)
+ self.write(")")
+
+ def visit_MarkSafeIfAutoescape(
+ self, node: nodes.MarkSafeIfAutoescape, frame: Frame
+ ) -> None:
+ self.write("(Markup if context.eval_ctx.autoescape else identity)(")
+ self.visit(node.expr, frame)
+ self.write(")")
+
+ def visit_EnvironmentAttribute(
+ self, node: nodes.EnvironmentAttribute, frame: Frame
+ ) -> None:
+ self.write("environment." + node.name)
+
+ def visit_ExtensionAttribute(
+ self, node: nodes.ExtensionAttribute, frame: Frame
+ ) -> None:
+ self.write(f"environment.extensions[{node.identifier!r}].{node.name}")
+
+ def visit_ImportedName(self, node: nodes.ImportedName, frame: Frame) -> None:
+ self.write(self.import_aliases[node.importname])
+
+ def visit_InternalName(self, node: nodes.InternalName, frame: Frame) -> None:
+ self.write(node.name)
+
+ def visit_ContextReference(
+ self, node: nodes.ContextReference, frame: Frame
+ ) -> None:
+ self.write("context")
+
+ def visit_DerivedContextReference(
+ self, node: nodes.DerivedContextReference, frame: Frame
+ ) -> None:
+ self.write(self.derive_context(frame))
+
+ def visit_Continue(self, node: nodes.Continue, frame: Frame) -> None:
+ self.writeline("continue", node)
+
+ def visit_Break(self, node: nodes.Break, frame: Frame) -> None:
+ self.writeline("break", node)
+
+ def visit_Scope(self, node: nodes.Scope, frame: Frame) -> None:
+ scope_frame = frame.inner()
+ scope_frame.symbols.analyze_node(node)
+ self.enter_frame(scope_frame)
+ self.blockvisit(node.body, scope_frame)
+ self.leave_frame(scope_frame)
+
+ def visit_OverlayScope(self, node: nodes.OverlayScope, frame: Frame) -> None:
+ ctx = self.temporary_identifier()
+ self.writeline(f"{ctx} = {self.derive_context(frame)}")
+ self.writeline(f"{ctx}.vars = ")
+ self.visit(node.context, frame)
+ self.push_context_reference(ctx)
+
+ scope_frame = frame.inner(isolated=True)
+ scope_frame.symbols.analyze_node(node)
+ self.enter_frame(scope_frame)
+ self.blockvisit(node.body, scope_frame)
+ self.leave_frame(scope_frame)
+ self.pop_context_reference()
+
+ def visit_EvalContextModifier(
+ self, node: nodes.EvalContextModifier, frame: Frame
+ ) -> None:
+ for keyword in node.options:
+ self.writeline(f"context.eval_ctx.{keyword.key} = ")
+ self.visit(keyword.value, frame)
+ try:
+ val = keyword.value.as_const(frame.eval_ctx)
+ except nodes.Impossible:
+ frame.eval_ctx.volatile = True
+ else:
+ setattr(frame.eval_ctx, keyword.key, val)
+
+ def visit_ScopedEvalContextModifier(
+ self, node: nodes.ScopedEvalContextModifier, frame: Frame
+ ) -> None:
+ old_ctx_name = self.temporary_identifier()
+ saved_ctx = frame.eval_ctx.save()
+ self.writeline(f"{old_ctx_name} = context.eval_ctx.save()")
+ self.visit_EvalContextModifier(node, frame)
+ for child in node.body:
+ self.visit(child, frame)
+ frame.eval_ctx.revert(saved_ctx)
+ self.writeline(f"context.eval_ctx.revert({old_ctx_name})")
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/jinja2/constants.py b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/constants.py
new file mode 100644
index 0000000..41a1c23
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/constants.py
@@ -0,0 +1,20 @@
+#: list of lorem ipsum words used by the lipsum() helper function
+LOREM_IPSUM_WORDS = """\
+a ac accumsan ad adipiscing aenean aliquam aliquet amet ante aptent arcu at
+auctor augue bibendum blandit class commodo condimentum congue consectetuer
+consequat conubia convallis cras cubilia cum curabitur curae cursus dapibus
+diam dictum dictumst dignissim dis dolor donec dui duis egestas eget eleifend
+elementum elit enim erat eros est et etiam eu euismod facilisi facilisis fames
+faucibus felis fermentum feugiat fringilla fusce gravida habitant habitasse hac
+hendrerit hymenaeos iaculis id imperdiet in inceptos integer interdum ipsum
+justo lacinia lacus laoreet lectus leo libero ligula litora lobortis lorem
+luctus maecenas magna magnis malesuada massa mattis mauris metus mi molestie
+mollis montes morbi mus nam nascetur natoque nec neque netus nibh nisi nisl non
+nonummy nostra nulla nullam nunc odio orci ornare parturient pede pellentesque
+penatibus per pharetra phasellus placerat platea porta porttitor posuere
+potenti praesent pretium primis proin pulvinar purus quam quis quisque rhoncus
+ridiculus risus rutrum sagittis sapien scelerisque sed sem semper senectus sit
+sociis sociosqu sodales sollicitudin suscipit suspendisse taciti tellus tempor
+tempus tincidunt torquent tortor tristique turpis ullamcorper ultrices
+ultricies urna ut varius vehicula vel velit venenatis vestibulum vitae vivamus
+viverra volutpat vulputate"""
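
LOREM_IPSUM_WORDS feeds jinja2.utils.generate_lorem_ipsum, which the default namespace exposes to templates as lipsum(). A minimal sketch of both entry points (the parameter values are arbitrary):

from jinja2 import Environment
from jinja2.utils import generate_lorem_ipsum

# Two plain-text paragraphs assembled from LOREM_IPSUM_WORDS.
print(generate_lorem_ipsum(n=2, html=False))

# The same helper, reached through the lipsum() global of the default namespace.
env = Environment()
print(env.from_string("{{ lipsum(n=1, html=False) }}").render())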
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/jinja2/debug.py b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/debug.py
new file mode 100644
index 0000000..eeeeee7
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/debug.py
@@ -0,0 +1,191 @@
+import sys
+import typing as t
+from types import CodeType
+from types import TracebackType
+
+from .exceptions import TemplateSyntaxError
+from .utils import internal_code
+from .utils import missing
+
+if t.TYPE_CHECKING:
+ from .runtime import Context
+
+
+def rewrite_traceback_stack(source: t.Optional[str] = None) -> BaseException:
+ """Rewrite the current exception to replace any tracebacks from
+ within compiled template code with tracebacks that look like they
+ came from the template source.
+
+ This must be called within an ``except`` block.
+
+ :param source: For ``TemplateSyntaxError``, the original source if
+ known.
+ :return: The original exception with the rewritten traceback.
+ """
+ _, exc_value, tb = sys.exc_info()
+ exc_value = t.cast(BaseException, exc_value)
+ tb = t.cast(TracebackType, tb)
+
+ if isinstance(exc_value, TemplateSyntaxError) and not exc_value.translated:
+ exc_value.translated = True
+ exc_value.source = source
+ # Remove the old traceback, otherwise the frames from the
+ # compiler still show up.
+ exc_value.with_traceback(None)
+ # Outside of runtime, so the frame isn't executing template
+ # code, but it still needs to point at the template.
+ tb = fake_traceback(
+ exc_value, None, exc_value.filename or "", exc_value.lineno
+ )
+ else:
+ # Skip the frame for the render function.
+ tb = tb.tb_next
+
+ stack = []
+
+    # Build the stack of traceback objects, replacing any in template
+ # code with the source file and line information.
+ while tb is not None:
+ # Skip frames decorated with @internalcode. These are internal
+ # calls that aren't useful in template debugging output.
+ if tb.tb_frame.f_code in internal_code:
+ tb = tb.tb_next
+ continue
+
+ template = tb.tb_frame.f_globals.get("__jinja_template__")
+
+ if template is not None:
+ lineno = template.get_corresponding_lineno(tb.tb_lineno)
+ fake_tb = fake_traceback(exc_value, tb, template.filename, lineno)
+ stack.append(fake_tb)
+ else:
+ stack.append(tb)
+
+ tb = tb.tb_next
+
+ tb_next = None
+
+ # Assign tb_next in reverse to avoid circular references.
+ for tb in reversed(stack):
+ tb.tb_next = tb_next
+ tb_next = tb
+
+ return exc_value.with_traceback(tb_next)
+
+
+def fake_traceback( # type: ignore
+ exc_value: BaseException, tb: t.Optional[TracebackType], filename: str, lineno: int
+) -> TracebackType:
+ """Produce a new traceback object that looks like it came from the
+ template source instead of the compiled code. The filename, line
+ number, and location name will point to the template, and the local
+ variables will be the current template context.
+
+ :param exc_value: The original exception to be re-raised to create
+ the new traceback.
+ :param tb: The original traceback to get the local variables and
+ code info from.
+ :param filename: The template filename.
+ :param lineno: The line number in the template source.
+ """
+ if tb is not None:
+ # Replace the real locals with the context that would be
+ # available at that point in the template.
+ locals = get_template_locals(tb.tb_frame.f_locals)
+ locals.pop("__jinja_exception__", None)
+ else:
+ locals = {}
+
+ globals = {
+ "__name__": filename,
+ "__file__": filename,
+ "__jinja_exception__": exc_value,
+ }
+ # Raise an exception at the correct line number.
+ code: CodeType = compile(
+ "\n" * (lineno - 1) + "raise __jinja_exception__", filename, "exec"
+ )
+
+ # Build a new code object that points to the template file and
+ # replaces the location with a block name.
+ location = "template"
+
+ if tb is not None:
+ function = tb.tb_frame.f_code.co_name
+
+ if function == "root":
+ location = "top-level template code"
+ elif function.startswith("block_"):
+ location = f"block {function[6:]!r}"
+
+ if sys.version_info >= (3, 8):
+ code = code.replace(co_name=location)
+ else:
+ code = CodeType(
+ code.co_argcount,
+ code.co_kwonlyargcount,
+ code.co_nlocals,
+ code.co_stacksize,
+ code.co_flags,
+ code.co_code,
+ code.co_consts,
+ code.co_names,
+ code.co_varnames,
+ code.co_filename,
+ location,
+ code.co_firstlineno,
+ code.co_lnotab,
+ code.co_freevars,
+ code.co_cellvars,
+ )
+
+ # Execute the new code, which is guaranteed to raise, and return
+ # the new traceback without this frame.
+ try:
+ exec(code, globals, locals)
+ except BaseException:
+ return sys.exc_info()[2].tb_next # type: ignore
+
+
+def get_template_locals(real_locals: t.Mapping[str, t.Any]) -> t.Dict[str, t.Any]:
+ """Based on the runtime locals, get the context that would be
+ available at that point in the template.
+ """
+ # Start with the current template context.
+ ctx: t.Optional[Context] = real_locals.get("context")
+
+ if ctx is not None:
+ data: t.Dict[str, t.Any] = ctx.get_all().copy()
+ else:
+ data = {}
+
+ # Might be in a derived context that only sets local variables
+ # rather than pushing a context. Local variables follow the scheme
+ # l_depth_name. Find the highest-depth local that has a value for
+ # each name.
+ local_overrides: t.Dict[str, t.Tuple[int, t.Any]] = {}
+
+ for name, value in real_locals.items():
+ if not name.startswith("l_") or value is missing:
+ # Not a template variable, or no longer relevant.
+ continue
+
+ try:
+ _, depth_str, name = name.split("_", 2)
+ depth = int(depth_str)
+ except ValueError:
+ continue
+
+ cur_depth = local_overrides.get(name, (-1,))[0]
+
+ if cur_depth < depth:
+ local_overrides[name] = (depth, value)
+
+ # Modify the context with any derived context.
+ for name, (_, value) in local_overrides.items():
+ if value is missing:
+ data.pop(name, None)
+ else:
+ data[name] = value
+
+ return data
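
rewrite_traceback_stack and fake_traceback are called by the runtime's exception handling rather than by user code, but their effect is easy to observe: render a template that raises, and the reported stack contains a synthetic frame pointing at the template line instead of the generated module. A minimal sketch:

import traceback

from jinja2 import Environment

template = Environment().from_string("first line\n{{ 1 // 0 }}\n")

try:
    template.render()
except ZeroDivisionError:
    # The rewritten stack includes a fake frame named
    # "top-level template code" at line 2 of the template source.
    traceback.print_exc()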
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/jinja2/defaults.py b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/defaults.py
new file mode 100644
index 0000000..638cad3
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/defaults.py
@@ -0,0 +1,48 @@
+import typing as t
+
+from .filters import FILTERS as DEFAULT_FILTERS # noqa: F401
+from .tests import TESTS as DEFAULT_TESTS # noqa: F401
+from .utils import Cycler
+from .utils import generate_lorem_ipsum
+from .utils import Joiner
+from .utils import Namespace
+
+if t.TYPE_CHECKING:
+ import typing_extensions as te
+
+# defaults for the parser / lexer
+BLOCK_START_STRING = "{%"
+BLOCK_END_STRING = "%}"
+VARIABLE_START_STRING = "{{"
+VARIABLE_END_STRING = "}}"
+COMMENT_START_STRING = "{#"
+COMMENT_END_STRING = "#}"
+LINE_STATEMENT_PREFIX: t.Optional[str] = None
+LINE_COMMENT_PREFIX: t.Optional[str] = None
+TRIM_BLOCKS = False
+LSTRIP_BLOCKS = False
+NEWLINE_SEQUENCE: "te.Literal['\\n', '\\r\\n', '\\r']" = "\n"
+KEEP_TRAILING_NEWLINE = False
+
+# default filters, tests and namespace
+
+DEFAULT_NAMESPACE = {
+ "range": range,
+ "dict": dict,
+ "lipsum": generate_lorem_ipsum,
+ "cycler": Cycler,
+ "joiner": Joiner,
+ "namespace": Namespace,
+}
+
+# default policies
+DEFAULT_POLICIES: t.Dict[str, t.Any] = {
+ "compiler.ascii_str": True,
+ "urlize.rel": "noopener",
+ "urlize.target": None,
+ "urlize.extra_schemes": None,
+ "truncate.leeway": 5,
+ "json.dumps_function": None,
+ "json.dumps_kwargs": {"sort_keys": True},
+ "ext.i18n.trimmed": False,
+}
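
DEFAULT_POLICIES is copied onto each Environment as env.policies, so individual entries can be tuned per environment without touching this module-level dict. A minimal sketch using the truncate and tojson policies (the values are chosen for illustration only):

from jinja2 import Environment

env = Environment()

# Drop the default 5-character leeway so truncate cuts exactly at the limit.
env.policies["truncate.leeway"] = 0
# Extra keyword arguments forwarded to json.dumps by the tojson filter.
env.policies["json.dumps_kwargs"] = {"sort_keys": True, "indent": 2}

print(env.from_string("{{ 'hello world example'|truncate(11) }}").render())
print(env.from_string("{{ {'b': 1, 'a': 2}|tojson }}").render())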
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/jinja2/environment.py b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/environment.py
new file mode 100644
index 0000000..0fc6e5b
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/environment.py
@@ -0,0 +1,1672 @@
+"""Classes for managing templates and their runtime and compile time
+options.
+"""
+
+import os
+import typing
+import typing as t
+import weakref
+from collections import ChainMap
+from functools import lru_cache
+from functools import partial
+from functools import reduce
+from types import CodeType
+
+from markupsafe import Markup
+
+from . import nodes
+from .compiler import CodeGenerator
+from .compiler import generate
+from .defaults import BLOCK_END_STRING
+from .defaults import BLOCK_START_STRING
+from .defaults import COMMENT_END_STRING
+from .defaults import COMMENT_START_STRING
+from .defaults import DEFAULT_FILTERS # type: ignore[attr-defined]
+from .defaults import DEFAULT_NAMESPACE
+from .defaults import DEFAULT_POLICIES
+from .defaults import DEFAULT_TESTS # type: ignore[attr-defined]
+from .defaults import KEEP_TRAILING_NEWLINE
+from .defaults import LINE_COMMENT_PREFIX
+from .defaults import LINE_STATEMENT_PREFIX
+from .defaults import LSTRIP_BLOCKS
+from .defaults import NEWLINE_SEQUENCE
+from .defaults import TRIM_BLOCKS
+from .defaults import VARIABLE_END_STRING
+from .defaults import VARIABLE_START_STRING
+from .exceptions import TemplateNotFound
+from .exceptions import TemplateRuntimeError
+from .exceptions import TemplatesNotFound
+from .exceptions import TemplateSyntaxError
+from .exceptions import UndefinedError
+from .lexer import get_lexer
+from .lexer import Lexer
+from .lexer import TokenStream
+from .nodes import EvalContext
+from .parser import Parser
+from .runtime import Context
+from .runtime import new_context
+from .runtime import Undefined
+from .utils import _PassArg
+from .utils import concat
+from .utils import consume
+from .utils import import_string
+from .utils import internalcode
+from .utils import LRUCache
+from .utils import missing
+
+if t.TYPE_CHECKING:
+ import typing_extensions as te
+
+ from .bccache import BytecodeCache
+ from .ext import Extension
+ from .loaders import BaseLoader
+
+_env_bound = t.TypeVar("_env_bound", bound="Environment")
+
+
+# for direct template usage we have up to ten living environments
+@lru_cache(maxsize=10)
+def get_spontaneous_environment(cls: t.Type[_env_bound], *args: t.Any) -> _env_bound:
+ """Return a new spontaneous environment. A spontaneous environment
+ is used for templates created directly rather than through an
+ existing environment.
+
+ :param cls: Environment class to create.
+ :param args: Positional arguments passed to environment.
+ """
+ env = cls(*args)
+ env.shared = True
+ return env
+
+
+def create_cache(
+ size: int,
+) -> t.Optional[t.MutableMapping[t.Tuple["weakref.ref[t.Any]", str], "Template"]]:
+ """Return the cache class for the given size."""
+ if size == 0:
+ return None
+
+ if size < 0:
+ return {}
+
+ return LRUCache(size) # type: ignore
+
+
+def copy_cache(
+ cache: t.Optional[t.MutableMapping[t.Any, t.Any]],
+) -> t.Optional[t.MutableMapping[t.Tuple["weakref.ref[t.Any]", str], "Template"]]:
+ """Create an empty copy of the given cache."""
+ if cache is None:
+ return None
+
+ if type(cache) is dict: # noqa E721
+ return {}
+
+ return LRUCache(cache.capacity) # type: ignore
+
+
+def load_extensions(
+ environment: "Environment",
+ extensions: t.Sequence[t.Union[str, t.Type["Extension"]]],
+) -> t.Dict[str, "Extension"]:
+ """Load the extensions from the list and bind it to the environment.
+ Returns a dict of instantiated extensions.
+ """
+ result = {}
+
+ for extension in extensions:
+ if isinstance(extension, str):
+ extension = t.cast(t.Type["Extension"], import_string(extension))
+
+ result[extension.identifier] = extension(environment)
+
+ return result
+
+
+def _environment_config_check(environment: _env_bound) -> _env_bound:
+ """Perform a sanity check on the environment."""
+ assert issubclass(
+ environment.undefined, Undefined
+ ), "'undefined' must be a subclass of 'jinja2.Undefined'."
+ assert (
+ environment.block_start_string
+ != environment.variable_start_string
+ != environment.comment_start_string
+ ), "block, variable and comment start strings must be different."
+ assert environment.newline_sequence in {
+ "\r",
+ "\r\n",
+ "\n",
+ }, "'newline_sequence' must be one of '\\n', '\\r\\n', or '\\r'."
+ return environment
+
+
+class Environment:
+ r"""The core component of Jinja is the `Environment`. It contains
+ important shared variables like configuration, filters, tests,
+ globals and others. Instances of this class may be modified if
+    they are not shared and if no template has been loaded yet.
+    Modifying an environment after the first template has been loaded
+ will lead to surprising effects and undefined behavior.
+
+ Here are the possible initialization parameters:
+
+ `block_start_string`
+ The string marking the beginning of a block. Defaults to ``'{%'``.
+
+ `block_end_string`
+ The string marking the end of a block. Defaults to ``'%}'``.
+
+ `variable_start_string`
+ The string marking the beginning of a print statement.
+ Defaults to ``'{{'``.
+
+ `variable_end_string`
+ The string marking the end of a print statement. Defaults to
+ ``'}}'``.
+
+ `comment_start_string`
+ The string marking the beginning of a comment. Defaults to ``'{#'``.
+
+ `comment_end_string`
+ The string marking the end of a comment. Defaults to ``'#}'``.
+
+ `line_statement_prefix`
+ If given and a string, this will be used as prefix for line based
+ statements. See also :ref:`line-statements`.
+
+ `line_comment_prefix`
+ If given and a string, this will be used as prefix for line based
+ comments. See also :ref:`line-statements`.
+
+ .. versionadded:: 2.2
+
+ `trim_blocks`
+ If this is set to ``True`` the first newline after a block is
+ removed (block, not variable tag!). Defaults to `False`.
+
+ `lstrip_blocks`
+ If this is set to ``True`` leading spaces and tabs are stripped
+ from the start of a line to a block. Defaults to `False`.
+
+ `newline_sequence`
+ The sequence that starts a newline. Must be one of ``'\r'``,
+ ``'\n'`` or ``'\r\n'``. The default is ``'\n'`` which is a
+ useful default for Linux and OS X systems as well as web
+ applications.
+
+ `keep_trailing_newline`
+ Preserve the trailing newline when rendering templates.
+ The default is ``False``, which causes a single newline,
+ if present, to be stripped from the end of the template.
+
+ .. versionadded:: 2.7
+
+ `extensions`
+ List of Jinja extensions to use. This can either be import paths
+ as strings or extension classes. For more information have a
+        look at :ref:`the extensions documentation <jinja-extensions>`.
+
+ `optimized`
+ should the optimizer be enabled? Default is ``True``.
+
+ `undefined`
+ :class:`Undefined` or a subclass of it that is used to represent
+ undefined values in the template.
+
+ `finalize`
+ A callable that can be used to process the result of a variable
+ expression before it is output. For example one can convert
+ ``None`` implicitly into an empty string here.
+
+ `autoescape`
+ If set to ``True`` the XML/HTML autoescaping feature is enabled by
+ default. For more details about autoescaping see
+ :class:`~markupsafe.Markup`. As of Jinja 2.4 this can also
+ be a callable that is passed the template name and has to
+ return ``True`` or ``False`` depending on autoescape should be
+ enabled by default.
+
+ .. versionchanged:: 2.4
+ `autoescape` can now be a function
+
+ `loader`
+ The template loader for this environment.
+
+ `cache_size`
+ The size of the cache. Per default this is ``400`` which means
+ that if more than 400 templates are loaded the loader will clean
+ out the least recently used template. If the cache size is set to
+ ``0`` templates are recompiled all the time, if the cache size is
+ ``-1`` the cache will not be cleaned.
+
+ .. versionchanged:: 2.8
+ The cache size was increased to 400 from a low 50.
+
+ `auto_reload`
+ Some loaders load templates from locations where the template
+        sources may change (i.e. file system or database). If
+ ``auto_reload`` is set to ``True`` (default) every time a template is
+ requested the loader checks if the source changed and if yes, it
+ will reload the template. For higher performance it's possible to
+ disable that.
+
+ `bytecode_cache`
+ If set to a bytecode cache object, this object will provide a
+ cache for the internal Jinja bytecode so that templates don't
+ have to be parsed if they were not changed.
+
+ See :ref:`bytecode-cache` for more information.
+
+ `enable_async`
+ If set to true this enables async template execution which
+ allows using async functions and generators.
+ """
+
+ #: if this environment is sandboxed. Modifying this variable won't make
+ #: the environment sandboxed though. For a real sandboxed environment
+ #: have a look at jinja2.sandbox. This flag alone controls the code
+ #: generation by the compiler.
+ sandboxed = False
+
+ #: True if the environment is just an overlay
+ overlayed = False
+
+ #: the environment this environment is linked to if it is an overlay
+ linked_to: t.Optional["Environment"] = None
+
+ #: shared environments have this set to `True`. A shared environment
+ #: must not be modified
+ shared = False
+
+ #: the class that is used for code generation. See
+ #: :class:`~jinja2.compiler.CodeGenerator` for more information.
+ code_generator_class: t.Type["CodeGenerator"] = CodeGenerator
+
+ concat = "".join
+
+ #: the context class that is used for templates. See
+ #: :class:`~jinja2.runtime.Context` for more information.
+ context_class: t.Type[Context] = Context
+
+ template_class: t.Type["Template"]
+
+ def __init__(
+ self,
+ block_start_string: str = BLOCK_START_STRING,
+ block_end_string: str = BLOCK_END_STRING,
+ variable_start_string: str = VARIABLE_START_STRING,
+ variable_end_string: str = VARIABLE_END_STRING,
+ comment_start_string: str = COMMENT_START_STRING,
+ comment_end_string: str = COMMENT_END_STRING,
+ line_statement_prefix: t.Optional[str] = LINE_STATEMENT_PREFIX,
+ line_comment_prefix: t.Optional[str] = LINE_COMMENT_PREFIX,
+ trim_blocks: bool = TRIM_BLOCKS,
+ lstrip_blocks: bool = LSTRIP_BLOCKS,
+ newline_sequence: "te.Literal['\\n', '\\r\\n', '\\r']" = NEWLINE_SEQUENCE,
+ keep_trailing_newline: bool = KEEP_TRAILING_NEWLINE,
+ extensions: t.Sequence[t.Union[str, t.Type["Extension"]]] = (),
+ optimized: bool = True,
+ undefined: t.Type[Undefined] = Undefined,
+ finalize: t.Optional[t.Callable[..., t.Any]] = None,
+ autoescape: t.Union[bool, t.Callable[[t.Optional[str]], bool]] = False,
+ loader: t.Optional["BaseLoader"] = None,
+ cache_size: int = 400,
+ auto_reload: bool = True,
+ bytecode_cache: t.Optional["BytecodeCache"] = None,
+ enable_async: bool = False,
+ ):
+ # !!Important notice!!
+ # The constructor accepts quite a few arguments that should be
+ # passed by keyword rather than position. However it's important to
+ # not change the order of arguments because it's used at least
+ # internally in those cases:
+ # - spontaneous environments (i18n extension and Template)
+ # - unittests
+        # If parameter changes are required, only add parameters at the end
+        # and don't change the existing arguments (or their defaults!).
+
+ # lexer / parser information
+ self.block_start_string = block_start_string
+ self.block_end_string = block_end_string
+ self.variable_start_string = variable_start_string
+ self.variable_end_string = variable_end_string
+ self.comment_start_string = comment_start_string
+ self.comment_end_string = comment_end_string
+ self.line_statement_prefix = line_statement_prefix
+ self.line_comment_prefix = line_comment_prefix
+ self.trim_blocks = trim_blocks
+ self.lstrip_blocks = lstrip_blocks
+ self.newline_sequence = newline_sequence
+ self.keep_trailing_newline = keep_trailing_newline
+
+ # runtime information
+ self.undefined: t.Type[Undefined] = undefined
+ self.optimized = optimized
+ self.finalize = finalize
+ self.autoescape = autoescape
+
+ # defaults
+ self.filters = DEFAULT_FILTERS.copy()
+ self.tests = DEFAULT_TESTS.copy()
+ self.globals = DEFAULT_NAMESPACE.copy()
+
+ # set the loader provided
+ self.loader = loader
+ self.cache = create_cache(cache_size)
+ self.bytecode_cache = bytecode_cache
+ self.auto_reload = auto_reload
+
+ # configurable policies
+ self.policies = DEFAULT_POLICIES.copy()
+
+ # load extensions
+ self.extensions = load_extensions(self, extensions)
+
+ self.is_async = enable_async
+ _environment_config_check(self)
+
+ def add_extension(self, extension: t.Union[str, t.Type["Extension"]]) -> None:
+ """Adds an extension after the environment was created.
+
+ .. versionadded:: 2.5
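+
+        For example, adding the built-in ``do`` extension to an existing
+        environment ``env``::
+
+            env.add_extension("jinja2.ext.do")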
+ """
+ self.extensions.update(load_extensions(self, [extension]))
+
+ def extend(self, **attributes: t.Any) -> None:
+ """Add the items to the instance of the environment if they do not exist
+        yet. This is used by :ref:`extensions <writing-extensions>` to register
+ callbacks and configuration values without breaking inheritance.
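+
+        For example, an extension might register a default configuration
+        value (the attribute name here is illustrative)::
+
+            environment.extend(fragment_cache_prefix="fragments/")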
+ """
+ for key, value in attributes.items():
+ if not hasattr(self, key):
+ setattr(self, key, value)
+
+ def overlay(
+ self,
+ block_start_string: str = missing,
+ block_end_string: str = missing,
+ variable_start_string: str = missing,
+ variable_end_string: str = missing,
+ comment_start_string: str = missing,
+ comment_end_string: str = missing,
+ line_statement_prefix: t.Optional[str] = missing,
+ line_comment_prefix: t.Optional[str] = missing,
+ trim_blocks: bool = missing,
+ lstrip_blocks: bool = missing,
+ newline_sequence: "te.Literal['\\n', '\\r\\n', '\\r']" = missing,
+ keep_trailing_newline: bool = missing,
+ extensions: t.Sequence[t.Union[str, t.Type["Extension"]]] = missing,
+ optimized: bool = missing,
+ undefined: t.Type[Undefined] = missing,
+ finalize: t.Optional[t.Callable[..., t.Any]] = missing,
+ autoescape: t.Union[bool, t.Callable[[t.Optional[str]], bool]] = missing,
+ loader: t.Optional["BaseLoader"] = missing,
+ cache_size: int = missing,
+ auto_reload: bool = missing,
+ bytecode_cache: t.Optional["BytecodeCache"] = missing,
+ enable_async: bool = missing,
+ ) -> "te.Self":
+ """Create a new overlay environment that shares all the data with the
+ current environment except for cache and the overridden attributes.
+ Extensions cannot be removed for an overlayed environment. An overlayed
+ environment automatically gets all the extensions of the environment it
+ is linked to plus optional extra extensions.
+
+        Creating overlays should happen after the initial environment was set
+        up completely. Not all attributes are truly linked; some are just
+        copied over, so modifications on the original environment may not be
+        reflected in the overlay.
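+
+        A small usage sketch; the alternate delimiters are illustrative::
+
+            bracket_env = env.overlay(
+                variable_start_string="[[",
+                variable_end_string="]]",
+            )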
+
+ .. versionchanged:: 3.1.5
+ ``enable_async`` is applied correctly.
+
+ .. versionchanged:: 3.1.2
+ Added the ``newline_sequence``, ``keep_trailing_newline``,
+ and ``enable_async`` parameters to match ``__init__``.
+ """
+ args = dict(locals())
+ del args["self"], args["cache_size"], args["extensions"], args["enable_async"]
+
+ rv = object.__new__(self.__class__)
+ rv.__dict__.update(self.__dict__)
+ rv.overlayed = True
+ rv.linked_to = self
+
+ for key, value in args.items():
+ if value is not missing:
+ setattr(rv, key, value)
+
+ if cache_size is not missing:
+ rv.cache = create_cache(cache_size)
+ else:
+ rv.cache = copy_cache(self.cache)
+
+ rv.extensions = {}
+ for key, value in self.extensions.items():
+ rv.extensions[key] = value.bind(rv)
+ if extensions is not missing:
+ rv.extensions.update(load_extensions(rv, extensions))
+
+ if enable_async is not missing:
+ rv.is_async = enable_async
+
+ return _environment_config_check(rv)
+
+ @property
+ def lexer(self) -> Lexer:
+ """The lexer for this environment."""
+ return get_lexer(self)
+
+ def iter_extensions(self) -> t.Iterator["Extension"]:
+ """Iterates over the extensions by priority."""
+ return iter(sorted(self.extensions.values(), key=lambda x: x.priority))
+
+ def getitem(
+ self, obj: t.Any, argument: t.Union[str, t.Any]
+ ) -> t.Union[t.Any, Undefined]:
+ """Get an item or attribute of an object but prefer the item."""
+ try:
+ return obj[argument]
+ except (AttributeError, TypeError, LookupError):
+ if isinstance(argument, str):
+ try:
+ attr = str(argument)
+ except Exception:
+ pass
+ else:
+ try:
+ return getattr(obj, attr)
+ except AttributeError:
+ pass
+ return self.undefined(obj=obj, name=argument)
+
+ def getattr(self, obj: t.Any, attribute: str) -> t.Any:
+ """Get an item or attribute of an object but prefer the attribute.
+ Unlike :meth:`getitem` the attribute *must* be a string.
+ """
+ try:
+ return getattr(obj, attribute)
+ except AttributeError:
+ pass
+ try:
+ return obj[attribute]
+ except (TypeError, LookupError, AttributeError):
+ return self.undefined(obj=obj, name=attribute)
+
+ def _filter_test_common(
+ self,
+ name: t.Union[str, Undefined],
+ value: t.Any,
+ args: t.Optional[t.Sequence[t.Any]],
+ kwargs: t.Optional[t.Mapping[str, t.Any]],
+ context: t.Optional[Context],
+ eval_ctx: t.Optional[EvalContext],
+ is_filter: bool,
+ ) -> t.Any:
+ if is_filter:
+ env_map = self.filters
+ type_name = "filter"
+ else:
+ env_map = self.tests
+ type_name = "test"
+
+ func = env_map.get(name) # type: ignore
+
+ if func is None:
+ msg = f"No {type_name} named {name!r}."
+
+ if isinstance(name, Undefined):
+ try:
+ name._fail_with_undefined_error()
+ except Exception as e:
+ msg = f"{msg} ({e}; did you forget to quote the callable name?)"
+
+ raise TemplateRuntimeError(msg)
+
+ args = [value, *(args if args is not None else ())]
+ kwargs = kwargs if kwargs is not None else {}
+ pass_arg = _PassArg.from_obj(func)
+
+ if pass_arg is _PassArg.context:
+ if context is None:
+ raise TemplateRuntimeError(
+ f"Attempted to invoke a context {type_name} without context."
+ )
+
+ args.insert(0, context)
+ elif pass_arg is _PassArg.eval_context:
+ if eval_ctx is None:
+ if context is not None:
+ eval_ctx = context.eval_ctx
+ else:
+ eval_ctx = EvalContext(self)
+
+ args.insert(0, eval_ctx)
+ elif pass_arg is _PassArg.environment:
+ args.insert(0, self)
+
+ return func(*args, **kwargs)
+
+ def call_filter(
+ self,
+ name: str,
+ value: t.Any,
+ args: t.Optional[t.Sequence[t.Any]] = None,
+ kwargs: t.Optional[t.Mapping[str, t.Any]] = None,
+ context: t.Optional[Context] = None,
+ eval_ctx: t.Optional[EvalContext] = None,
+ ) -> t.Any:
+ """Invoke a filter on a value the same way the compiler does.
+
+ This might return a coroutine if the filter is running from an
+ environment in async mode and the filter supports async
+ execution. It's your responsibility to await this if needed.
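+
+        For example, calling the built-in ``upper`` filter directly::
+
+            env.call_filter("upper", "hello")  # returns "HELLO"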
+
+ .. versionadded:: 2.7
+ """
+ return self._filter_test_common(
+ name, value, args, kwargs, context, eval_ctx, True
+ )
+
+ def call_test(
+ self,
+ name: str,
+ value: t.Any,
+ args: t.Optional[t.Sequence[t.Any]] = None,
+ kwargs: t.Optional[t.Mapping[str, t.Any]] = None,
+ context: t.Optional[Context] = None,
+ eval_ctx: t.Optional[EvalContext] = None,
+ ) -> t.Any:
+ """Invoke a test on a value the same way the compiler does.
+
+ This might return a coroutine if the test is running from an
+ environment in async mode and the test supports async execution.
+ It's your responsibility to await this if needed.
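+
+        For example, calling the built-in ``odd`` test directly::
+
+            env.call_test("odd", 3)  # returns True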
+
+ .. versionchanged:: 3.0
+ Tests support ``@pass_context``, etc. decorators. Added
+ the ``context`` and ``eval_ctx`` parameters.
+
+ .. versionadded:: 2.7
+ """
+ return self._filter_test_common(
+ name, value, args, kwargs, context, eval_ctx, False
+ )
+
+ @internalcode
+ def parse(
+ self,
+ source: str,
+ name: t.Optional[str] = None,
+ filename: t.Optional[str] = None,
+ ) -> nodes.Template:
+        """Parse the source code and return the abstract syntax tree. This
+ tree of nodes is used by the compiler to convert the template into
+ executable source- or bytecode. This is useful for debugging or to
+ extract information from templates.
+
+        If you are :ref:`developing Jinja extensions <writing-extensions>`
+ this gives you a good overview of the node tree generated.
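+
+        For example, combined with :mod:`jinja2.meta` to find undeclared
+        variables::
+
+            from jinja2 import meta
+
+            ast = env.parse("Hello {{ name }}!")
+            meta.find_undeclared_variables(ast)  # returns {'name'}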
+ """
+ try:
+ return self._parse(source, name, filename)
+ except TemplateSyntaxError:
+ self.handle_exception(source=source)
+
+ def _parse(
+ self, source: str, name: t.Optional[str], filename: t.Optional[str]
+ ) -> nodes.Template:
+ """Internal parsing function used by `parse` and `compile`."""
+ return Parser(self, source, name, filename).parse()
+
+ def lex(
+ self,
+ source: str,
+ name: t.Optional[str] = None,
+ filename: t.Optional[str] = None,
+ ) -> t.Iterator[t.Tuple[int, str, str]]:
+        """Lex the given source code and return a generator that yields
+ tokens as tuples in the form ``(lineno, token_type, value)``.
+        This can be useful for :ref:`extension development <writing-extensions>`
+ and debugging templates.
+
+        This does not perform preprocessing. If you want the preprocessing
+        of the extensions to be applied, you have to filter the source through
+        the :meth:`preprocess` method.
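+
+        For example::
+
+            for lineno, token_type, value in env.lex("{{ 1 + 1 }}"):
+                print(lineno, token_type, value)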
+ """
+ source = str(source)
+ try:
+ return self.lexer.tokeniter(source, name, filename)
+ except TemplateSyntaxError:
+ self.handle_exception(source=source)
+
+ def preprocess(
+ self,
+ source: str,
+ name: t.Optional[str] = None,
+ filename: t.Optional[str] = None,
+ ) -> str:
+ """Preprocesses the source with all extensions. This is automatically
+ called for all parsing and compiling methods but *not* for :meth:`lex`
+ because there you usually only want the actual source tokenized.
+ """
+ return reduce(
+ lambda s, e: e.preprocess(s, name, filename),
+ self.iter_extensions(),
+ str(source),
+ )
+
+ def _tokenize(
+ self,
+ source: str,
+ name: t.Optional[str],
+ filename: t.Optional[str] = None,
+ state: t.Optional[str] = None,
+ ) -> TokenStream:
+ """Called by the parser to do the preprocessing and filtering
+ for all the extensions. Returns a :class:`~jinja2.lexer.TokenStream`.
+ """
+ source = self.preprocess(source, name, filename)
+ stream = self.lexer.tokenize(source, name, filename, state)
+
+ for ext in self.iter_extensions():
+ stream = ext.filter_stream(stream) # type: ignore
+
+ if not isinstance(stream, TokenStream):
+ stream = TokenStream(stream, name, filename)
+
+ return stream
+
+ def _generate(
+ self,
+ source: nodes.Template,
+ name: t.Optional[str],
+ filename: t.Optional[str],
+ defer_init: bool = False,
+ ) -> str:
+ """Internal hook that can be overridden to hook a different generate
+ method in.
+
+ .. versionadded:: 2.5
+ """
+ return generate( # type: ignore
+ source,
+ self,
+ name,
+ filename,
+ defer_init=defer_init,
+ optimized=self.optimized,
+ )
+
+ def _compile(self, source: str, filename: str) -> CodeType:
+ """Internal hook that can be overridden to hook a different compile
+ method in.
+
+ .. versionadded:: 2.5
+ """
+ return compile(source, filename, "exec")
+
+ @typing.overload
+ def compile(
+ self,
+ source: t.Union[str, nodes.Template],
+ name: t.Optional[str] = None,
+ filename: t.Optional[str] = None,
+ raw: "te.Literal[False]" = False,
+ defer_init: bool = False,
+ ) -> CodeType: ...
+
+ @typing.overload
+ def compile(
+ self,
+ source: t.Union[str, nodes.Template],
+ name: t.Optional[str] = None,
+ filename: t.Optional[str] = None,
+ raw: "te.Literal[True]" = ...,
+ defer_init: bool = False,
+ ) -> str: ...
+
+ @internalcode
+ def compile(
+ self,
+ source: t.Union[str, nodes.Template],
+ name: t.Optional[str] = None,
+ filename: t.Optional[str] = None,
+ raw: bool = False,
+ defer_init: bool = False,
+ ) -> t.Union[str, CodeType]:
+ """Compile a node or template source code. The `name` parameter is
+ the load name of the template after it was joined using
+ :meth:`join_path` if necessary, not the filename on the file system.
+        The `filename` parameter is the estimated filename of the template on
+ the file system. If the template came from a database or memory this
+ can be omitted.
+
+        The return value of this method is a Python code object. If the `raw`
+        parameter is `True` the return value will be a string with Python
+        code equivalent to the bytecode that would otherwise be returned. This
+        method is mainly used internally.
+
+        `defer_init` is used internally to aid the module code generator. This
+        causes the generated code to be importable without the global
+        environment variable being set.
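+
+        For example, inspecting the generated Python source::
+
+            print(env.compile("{{ a + b }}", raw=True))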
+
+ .. versionadded:: 2.4
+ `defer_init` parameter added.
+ """
+ source_hint = None
+ try:
+ if isinstance(source, str):
+ source_hint = source
+ source = self._parse(source, name, filename)
+ source = self._generate(source, name, filename, defer_init=defer_init)
+ if raw:
+ return source
+ if filename is None:
+                filename = "<template>"
+ return self._compile(source, filename)
+ except TemplateSyntaxError:
+ self.handle_exception(source=source_hint)
+
+ def compile_expression(
+ self, source: str, undefined_to_none: bool = True
+ ) -> "TemplateExpression":
+        """A handy helper method that returns a callable accepting keyword
+        arguments corresponding to the variables in the expression. When
+        called, it returns the result of the expression.
+
+ This is useful if applications want to use the same rules as Jinja
+ in template "configuration files" or similar situations.
+
+ Example usage:
+
+ >>> env = Environment()
+ >>> expr = env.compile_expression('foo == 42')
+ >>> expr(foo=23)
+ False
+ >>> expr(foo=42)
+ True
+
+        By default the return value is converted to `None` if the
+ expression returns an undefined value. This can be changed
+ by setting `undefined_to_none` to `False`.
+
+ >>> env.compile_expression('var')() is None
+ True
+ >>> env.compile_expression('var', undefined_to_none=False)()
+ Undefined
+
+ .. versionadded:: 2.1
+ """
+ parser = Parser(self, source, state="variable")
+ try:
+ expr = parser.parse_expression()
+ if not parser.stream.eos:
+ raise TemplateSyntaxError(
+ "chunk after expression", parser.stream.current.lineno, None, None
+ )
+ expr.set_environment(self)
+ except TemplateSyntaxError:
+ self.handle_exception(source=source)
+
+ body = [nodes.Assign(nodes.Name("result", "store"), expr, lineno=1)]
+ template = self.from_string(nodes.Template(body, lineno=1))
+ return TemplateExpression(template, undefined_to_none)
+
+ def compile_templates(
+ self,
+ target: t.Union[str, "os.PathLike[str]"],
+ extensions: t.Optional[t.Collection[str]] = None,
+ filter_func: t.Optional[t.Callable[[str], bool]] = None,
+ zip: t.Optional[str] = "deflated",
+ log_function: t.Optional[t.Callable[[str], None]] = None,
+ ignore_errors: bool = True,
+ ) -> None:
+        """Finds all the templates the loader can find, compiles them,
+        and stores them in `target`. If `zip` is `None`, the templates
+        are stored in a directory instead of a zipfile.
+        By default the deflate zip algorithm is used. To switch to
+        the stored algorithm, `zip` can be set to ``'stored'``.
+
+ `extensions` and `filter_func` are passed to :meth:`list_templates`.
+ Each template returned will be compiled to the target folder or
+ zipfile.
+
+ By default template compilation errors are ignored. In case a
+ log function is provided, errors are logged. If you want template
+ syntax errors to abort the compilation you can set `ignore_errors`
+ to `False` and you will get an exception on syntax errors.
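+
+        A sketch of precompiling into a directory and loading the result
+        with the ``ModuleLoader`` loader; the paths here are illustrative::
+
+            env.compile_templates("compiled_templates", zip=None)
+
+            from jinja2 import Environment, ModuleLoader
+            fast_env = Environment(loader=ModuleLoader("compiled_templates"))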
+
+ .. versionadded:: 2.4
+ """
+ from .loaders import ModuleLoader
+
+ if log_function is None:
+
+ def log_function(x: str) -> None:
+ pass
+
+ assert log_function is not None
+ assert self.loader is not None, "No loader configured."
+
+ def write_file(filename: str, data: str) -> None:
+ if zip:
+ info = ZipInfo(filename)
+ info.external_attr = 0o755 << 16
+ zip_file.writestr(info, data)
+ else:
+ with open(os.path.join(target, filename), "wb") as f:
+ f.write(data.encode("utf8"))
+
+ if zip is not None:
+ from zipfile import ZIP_DEFLATED
+ from zipfile import ZIP_STORED
+ from zipfile import ZipFile
+ from zipfile import ZipInfo
+
+ zip_file = ZipFile(
+ target, "w", dict(deflated=ZIP_DEFLATED, stored=ZIP_STORED)[zip]
+ )
+ log_function(f"Compiling into Zip archive {target!r}")
+ else:
+ if not os.path.isdir(target):
+ os.makedirs(target)
+ log_function(f"Compiling into folder {target!r}")
+
+ try:
+ for name in self.list_templates(extensions, filter_func):
+ source, filename, _ = self.loader.get_source(self, name)
+ try:
+ code = self.compile(source, name, filename, True, True)
+ except TemplateSyntaxError as e:
+ if not ignore_errors:
+ raise
+ log_function(f'Could not compile "{name}": {e}')
+ continue
+
+ filename = ModuleLoader.get_module_filename(name)
+
+ write_file(filename, code)
+ log_function(f'Compiled "{name}" as {filename}')
+ finally:
+ if zip:
+ zip_file.close()
+
+ log_function("Finished compiling templates")
+
+ def list_templates(
+ self,
+ extensions: t.Optional[t.Collection[str]] = None,
+ filter_func: t.Optional[t.Callable[[str], bool]] = None,
+ ) -> t.List[str]:
+        """Returns a list of templates for this environment. This requires
+        that the loader supports the
+        :meth:`~BaseLoader.list_templates` method.
+
+ If there are other files in the template folder besides the
+ actual templates, the returned list can be filtered. There are two
+ ways: either `extensions` is set to a list of file extensions for
+ templates, or a `filter_func` can be provided which is a callable that
+ is passed a template name and should return `True` if it should end up
+ in the result list.
+
+ If the loader does not support that, a :exc:`TypeError` is raised.
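+
+        For example, listing only ``.html`` templates::
+
+            env.list_templates(extensions=["html"])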
+
+ .. versionadded:: 2.4
+ """
+ assert self.loader is not None, "No loader configured."
+ names = self.loader.list_templates()
+
+ if extensions is not None:
+ if filter_func is not None:
+ raise TypeError(
+ "either extensions or filter_func can be passed, but not both"
+ )
+
+ def filter_func(x: str) -> bool:
+ return "." in x and x.rsplit(".", 1)[1] in extensions
+
+ if filter_func is not None:
+ names = [name for name in names if filter_func(name)]
+
+ return names
+
+ def handle_exception(self, source: t.Optional[str] = None) -> "te.NoReturn":
+ """Exception handling helper. This is used internally to either raise
+ rewritten exceptions or return a rendered traceback for the template.
+ """
+ from .debug import rewrite_traceback_stack
+
+ raise rewrite_traceback_stack(source=source)
+
+ def join_path(self, template: str, parent: str) -> str:
+ """Join a template with the parent. By default all the lookups are
+ relative to the loader root so this method returns the `template`
+ parameter unchanged, but if the paths should be relative to the
+ parent template, this function can be used to calculate the real
+ template name.
+
+ Subclasses may override this method and implement template path
+ joining here.
+ """
+ return template
+
+ @internalcode
+ def _load_template(
+ self, name: str, globals: t.Optional[t.MutableMapping[str, t.Any]]
+ ) -> "Template":
+ if self.loader is None:
+ raise TypeError("no loader for this environment specified")
+ cache_key = (weakref.ref(self.loader), name)
+ if self.cache is not None:
+ template = self.cache.get(cache_key)
+ if template is not None and (
+ not self.auto_reload or template.is_up_to_date
+ ):
+ # template.globals is a ChainMap, modifying it will only
+ # affect the template, not the environment globals.
+ if globals:
+ template.globals.update(globals)
+
+ return template
+
+ template = self.loader.load(self, name, self.make_globals(globals))
+
+ if self.cache is not None:
+ self.cache[cache_key] = template
+ return template
+
+ @internalcode
+ def get_template(
+ self,
+ name: t.Union[str, "Template"],
+ parent: t.Optional[str] = None,
+ globals: t.Optional[t.MutableMapping[str, t.Any]] = None,
+ ) -> "Template":
+ """Load a template by name with :attr:`loader` and return a
+ :class:`Template`. If the template does not exist a
+ :exc:`TemplateNotFound` exception is raised.
+
+ :param name: Name of the template to load. When loading
+ templates from the filesystem, "/" is used as the path
+ separator, even on Windows.
+ :param parent: The name of the parent template importing this
+ template. :meth:`join_path` can be used to implement name
+ transformations with this.
+ :param globals: Extend the environment :attr:`globals` with
+ these extra variables available for all renders of this
+ template. If the template has already been loaded and
+ cached, its globals are updated with any new items.
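+
+        For example (the template name and variable are illustrative)::
+
+            template = env.get_template("emails/welcome.html")
+            html = template.render(user="alice")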
+
+ .. versionchanged:: 3.0
+ If a template is loaded from cache, ``globals`` will update
+ the template's globals instead of ignoring the new values.
+
+ .. versionchanged:: 2.4
+ If ``name`` is a :class:`Template` object it is returned
+ unchanged.
+ """
+ if isinstance(name, Template):
+ return name
+ if parent is not None:
+ name = self.join_path(name, parent)
+
+ return self._load_template(name, globals)
+
+ @internalcode
+ def select_template(
+ self,
+ names: t.Iterable[t.Union[str, "Template"]],
+ parent: t.Optional[str] = None,
+ globals: t.Optional[t.MutableMapping[str, t.Any]] = None,
+ ) -> "Template":
+ """Like :meth:`get_template`, but tries loading multiple names.
+ If none of the names can be loaded a :exc:`TemplatesNotFound`
+ exception is raised.
+
+ :param names: List of template names to try loading in order.
+ :param parent: The name of the parent template importing this
+ template. :meth:`join_path` can be used to implement name
+ transformations with this.
+ :param globals: Extend the environment :attr:`globals` with
+ these extra variables available for all renders of this
+ template. If the template has already been loaded and
+ cached, its globals are updated with any new items.
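+
+        For example, preferring a theme-specific template when it exists
+        (the names are illustrative)::
+
+            template = env.select_template(["theme/index.html", "index.html"])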
+
+ .. versionchanged:: 3.0
+ If a template is loaded from cache, ``globals`` will update
+ the template's globals instead of ignoring the new values.
+
+ .. versionchanged:: 2.11
+ If ``names`` is :class:`Undefined`, an :exc:`UndefinedError`
+ is raised instead. If no templates were found and ``names``
+ contains :class:`Undefined`, the message is more helpful.
+
+ .. versionchanged:: 2.4
+ If ``names`` contains a :class:`Template` object it is
+ returned unchanged.
+
+ .. versionadded:: 2.3
+ """
+ if isinstance(names, Undefined):
+ names._fail_with_undefined_error()
+
+ if not names:
+ raise TemplatesNotFound(
+ message="Tried to select from an empty list of templates."
+ )
+
+ for name in names:
+ if isinstance(name, Template):
+ return name
+ if parent is not None:
+ name = self.join_path(name, parent)
+ try:
+ return self._load_template(name, globals)
+ except (TemplateNotFound, UndefinedError):
+ pass
+ raise TemplatesNotFound(names) # type: ignore
+
+ @internalcode
+ def get_or_select_template(
+ self,
+ template_name_or_list: t.Union[
+ str, "Template", t.List[t.Union[str, "Template"]]
+ ],
+ parent: t.Optional[str] = None,
+ globals: t.Optional[t.MutableMapping[str, t.Any]] = None,
+ ) -> "Template":
+ """Use :meth:`select_template` if an iterable of template names
+ is given, or :meth:`get_template` if one name is given.
+
+ .. versionadded:: 2.3
+ """
+ if isinstance(template_name_or_list, (str, Undefined)):
+ return self.get_template(template_name_or_list, parent, globals)
+ elif isinstance(template_name_or_list, Template):
+ return template_name_or_list
+ return self.select_template(template_name_or_list, parent, globals)
+
+ def from_string(
+ self,
+ source: t.Union[str, nodes.Template],
+ globals: t.Optional[t.MutableMapping[str, t.Any]] = None,
+ template_class: t.Optional[t.Type["Template"]] = None,
+ ) -> "Template":
+ """Load a template from a source string without using
+ :attr:`loader`.
+
+ :param source: Jinja source to compile into a template.
+ :param globals: Extend the environment :attr:`globals` with
+ these extra variables available for all renders of this
+ template. If the template has already been loaded and
+ cached, its globals are updated with any new items.
+ :param template_class: Return an instance of this
+ :class:`Template` class.
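+
+        For example::
+
+            template = env.from_string("Hello {{ name }}!")
+            template.render(name="World")  # returns 'Hello World!'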
+ """
+ gs = self.make_globals(globals)
+ cls = template_class or self.template_class
+ return cls.from_code(self, self.compile(source), gs, None)
+
+ def make_globals(
+ self, d: t.Optional[t.MutableMapping[str, t.Any]]
+ ) -> t.MutableMapping[str, t.Any]:
+ """Make the globals map for a template. Any given template
+ globals overlay the environment :attr:`globals`.
+
+ Returns a :class:`collections.ChainMap`. This allows any changes
+ to a template's globals to only affect that template, while
+ changes to the environment's globals are still reflected.
+ However, avoid modifying any globals after a template is loaded.
+
+ :param d: Dict of template-specific globals.
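+
+        A small sketch of the layering; the keys and values are
+        illustrative::
+
+            env.globals["site_name"] = "Example"
+            g = env.make_globals({"page": "home"})
+            g["page"]       # 'home', from the template layer
+            g["site_name"]  # 'Example', falls through to the environment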
+
+ .. versionchanged:: 3.0
+ Use :class:`collections.ChainMap` to always prevent mutating
+ environment globals.
+ """
+ if d is None:
+ d = {}
+
+ return ChainMap(d, self.globals)
+
+
+class Template:
+ """A compiled template that can be rendered.
+
+ Use the methods on :class:`Environment` to create or load templates.
+ The environment is used to configure how templates are compiled and
+ behave.
+
+ It is also possible to create a template object directly. This is
+ not usually recommended. The constructor takes most of the same
+ arguments as :class:`Environment`. All templates created with the
+ same environment arguments share the same ephemeral ``Environment``
+ instance behind the scenes.
+
+ A template object should be considered immutable. Modifications on
+ the object are not supported.
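+
+    For example, creating a template directly (in most cases you would go
+    through an :class:`Environment` instead)::
+
+        template = Template("Hello {{ name }}!")
+        template.render(name="World")  # returns 'Hello World!'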
+ """
+
+ #: Type of environment to create when creating a template directly
+ #: rather than through an existing environment.
+ environment_class: t.Type[Environment] = Environment
+
+ environment: Environment
+ globals: t.MutableMapping[str, t.Any]
+ name: t.Optional[str]
+ filename: t.Optional[str]
+ blocks: t.Dict[str, t.Callable[[Context], t.Iterator[str]]]
+ root_render_func: t.Callable[[Context], t.Iterator[str]]
+ _module: t.Optional["TemplateModule"]
+ _debug_info: str
+ _uptodate: t.Optional[t.Callable[[], bool]]
+
+ def __new__(
+ cls,
+ source: t.Union[str, nodes.Template],
+ block_start_string: str = BLOCK_START_STRING,
+ block_end_string: str = BLOCK_END_STRING,
+ variable_start_string: str = VARIABLE_START_STRING,
+ variable_end_string: str = VARIABLE_END_STRING,
+ comment_start_string: str = COMMENT_START_STRING,
+ comment_end_string: str = COMMENT_END_STRING,
+ line_statement_prefix: t.Optional[str] = LINE_STATEMENT_PREFIX,
+ line_comment_prefix: t.Optional[str] = LINE_COMMENT_PREFIX,
+ trim_blocks: bool = TRIM_BLOCKS,
+ lstrip_blocks: bool = LSTRIP_BLOCKS,
+ newline_sequence: "te.Literal['\\n', '\\r\\n', '\\r']" = NEWLINE_SEQUENCE,
+ keep_trailing_newline: bool = KEEP_TRAILING_NEWLINE,
+ extensions: t.Sequence[t.Union[str, t.Type["Extension"]]] = (),
+ optimized: bool = True,
+ undefined: t.Type[Undefined] = Undefined,
+ finalize: t.Optional[t.Callable[..., t.Any]] = None,
+ autoescape: t.Union[bool, t.Callable[[t.Optional[str]], bool]] = False,
+ enable_async: bool = False,
+ ) -> t.Any: # it returns a `Template`, but this breaks the sphinx build...
+ env = get_spontaneous_environment(
+ cls.environment_class, # type: ignore
+ block_start_string,
+ block_end_string,
+ variable_start_string,
+ variable_end_string,
+ comment_start_string,
+ comment_end_string,
+ line_statement_prefix,
+ line_comment_prefix,
+ trim_blocks,
+ lstrip_blocks,
+ newline_sequence,
+ keep_trailing_newline,
+ frozenset(extensions),
+ optimized,
+ undefined, # type: ignore
+ finalize,
+ autoescape,
+ None,
+ 0,
+ False,
+ None,
+ enable_async,
+ )
+ return env.from_string(source, template_class=cls)
+
+ @classmethod
+ def from_code(
+ cls,
+ environment: Environment,
+ code: CodeType,
+ globals: t.MutableMapping[str, t.Any],
+ uptodate: t.Optional[t.Callable[[], bool]] = None,
+ ) -> "Template":
+ """Creates a template object from compiled code and the globals. This
+ is used by the loaders and environment to create a template object.
+ """
+ namespace = {"environment": environment, "__file__": code.co_filename}
+ exec(code, namespace)
+ rv = cls._from_namespace(environment, namespace, globals)
+ rv._uptodate = uptodate
+ return rv
+
+ @classmethod
+ def from_module_dict(
+ cls,
+ environment: Environment,
+ module_dict: t.MutableMapping[str, t.Any],
+ globals: t.MutableMapping[str, t.Any],
+ ) -> "Template":
+ """Creates a template object from a module. This is used by the
+ module loader to create a template object.
+
+ .. versionadded:: 2.4
+ """
+ return cls._from_namespace(environment, module_dict, globals)
+
+ @classmethod
+ def _from_namespace(
+ cls,
+ environment: Environment,
+ namespace: t.MutableMapping[str, t.Any],
+ globals: t.MutableMapping[str, t.Any],
+ ) -> "Template":
+ t: Template = object.__new__(cls)
+ t.environment = environment
+ t.globals = globals
+ t.name = namespace["name"]
+ t.filename = namespace["__file__"]
+ t.blocks = namespace["blocks"]
+
+ # render function and module
+ t.root_render_func = namespace["root"]
+ t._module = None
+
+ # debug and loader helpers
+ t._debug_info = namespace["debug_info"]
+ t._uptodate = None
+
+ # store the reference
+ namespace["environment"] = environment
+ namespace["__jinja_template__"] = t
+
+ return t
+
+ def render(self, *args: t.Any, **kwargs: t.Any) -> str:
+ """This method accepts the same arguments as the `dict` constructor:
+ A dict, a dict subclass or some keyword arguments. If no arguments
+ are given the context will be empty. These two calls do the same::
+
+ template.render(knights='that say nih')
+ template.render({'knights': 'that say nih'})
+
+ This will return the rendered template as a string.
+ """
+ if self.environment.is_async:
+ import asyncio
+
+ return asyncio.run(self.render_async(*args, **kwargs))
+
+ ctx = self.new_context(dict(*args, **kwargs))
+
+ try:
+ return self.environment.concat(self.root_render_func(ctx)) # type: ignore
+ except Exception:
+ self.environment.handle_exception()
+
+ async def render_async(self, *args: t.Any, **kwargs: t.Any) -> str:
+        """This works similarly to :meth:`render` but returns a coroutine
+ that when awaited returns the entire rendered template string. This
+ requires the async feature to be enabled.
+
+ Example usage::
+
+ await template.render_async(knights='that say nih; asynchronously')
+ """
+ if not self.environment.is_async:
+ raise RuntimeError(
+ "The environment was not created with async mode enabled."
+ )
+
+ ctx = self.new_context(dict(*args, **kwargs))
+
+ try:
+ return self.environment.concat( # type: ignore
+ [n async for n in self.root_render_func(ctx)] # type: ignore
+ )
+ except Exception:
+ return self.environment.handle_exception()
+
+ def stream(self, *args: t.Any, **kwargs: t.Any) -> "TemplateStream":
+ """Works exactly like :meth:`generate` but returns a
+ :class:`TemplateStream`.
+ """
+ return TemplateStream(self.generate(*args, **kwargs))
+
+ def generate(self, *args: t.Any, **kwargs: t.Any) -> t.Iterator[str]:
+        """For very large templates it can be useful to not render the whole
+        template at once but to evaluate it statement by statement and yield
+        the output piece by piece. This method does exactly that and returns
+        a generator that yields one string after another.
+
+ It accepts the same arguments as :meth:`render`.
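+
+        For example (the variable name is illustrative)::
+
+            for chunk in template.generate(items=range(3)):
+                print(chunk)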
+ """
+ if self.environment.is_async:
+ import asyncio
+
+ async def to_list() -> t.List[str]:
+ return [x async for x in self.generate_async(*args, **kwargs)]
+
+ yield from asyncio.run(to_list())
+ return
+
+ ctx = self.new_context(dict(*args, **kwargs))
+
+ try:
+ yield from self.root_render_func(ctx)
+ except Exception:
+ yield self.environment.handle_exception()
+
+ async def generate_async(
+ self, *args: t.Any, **kwargs: t.Any
+ ) -> t.AsyncGenerator[str, object]:
+ """An async version of :meth:`generate`. Works very similarly but
+ returns an async iterator instead.
+ """
+ if not self.environment.is_async:
+ raise RuntimeError(
+ "The environment was not created with async mode enabled."
+ )
+
+ ctx = self.new_context(dict(*args, **kwargs))
+
+ try:
+ agen = self.root_render_func(ctx)
+ try:
+ async for event in agen: # type: ignore
+ yield event
+ finally:
+ # we can't use async with aclosing(...) because that's only
+ # in 3.10+
+ await agen.aclose() # type: ignore
+ except Exception:
+ yield self.environment.handle_exception()
+
+ def new_context(
+ self,
+ vars: t.Optional[t.Dict[str, t.Any]] = None,
+ shared: bool = False,
+ locals: t.Optional[t.Mapping[str, t.Any]] = None,
+ ) -> Context:
+ """Create a new :class:`Context` for this template. The vars
+        provided will be passed to the template. By default the globals
+ are added to the context. If shared is set to `True` the data
+ is passed as is to the context without adding the globals.
+
+ `locals` can be a dict of local variables for internal usage.
+ """
+ return new_context(
+ self.environment, self.name, self.blocks, vars, shared, self.globals, locals
+ )
+
+ def make_module(
+ self,
+ vars: t.Optional[t.Dict[str, t.Any]] = None,
+ shared: bool = False,
+ locals: t.Optional[t.Mapping[str, t.Any]] = None,
+ ) -> "TemplateModule":
+ """This method works like the :attr:`module` attribute when called
+ without arguments but it will evaluate the template on every call
+ rather than caching it. It's also possible to provide
+ a dict which is then used as context. The arguments are the same
+ as for the :meth:`new_context` method.
+ """
+ ctx = self.new_context(vars, shared, locals)
+ return TemplateModule(self, ctx)
+
+ async def make_module_async(
+ self,
+ vars: t.Optional[t.Dict[str, t.Any]] = None,
+ shared: bool = False,
+ locals: t.Optional[t.Mapping[str, t.Any]] = None,
+ ) -> "TemplateModule":
+        """Because template module creation can invoke template code, this
+        method must be used instead of :meth:`make_module` when async
+        execution is enabled. Likewise, the :attr:`module` attribute is
+        unavailable in async mode.
+ """
+ ctx = self.new_context(vars, shared, locals)
+ return TemplateModule(
+ self,
+ ctx,
+ [x async for x in self.root_render_func(ctx)], # type: ignore
+ )
+
+ @internalcode
+ def _get_default_module(self, ctx: t.Optional[Context] = None) -> "TemplateModule":
+ """If a context is passed in, this means that the template was
+ imported. Imported templates have access to the current
+ template's globals by default, but they can only be accessed via
+ the context during runtime.
+
+ If there are new globals, we need to create a new module because
+ the cached module is already rendered and will not have access
+ to globals from the current context. This new module is not
+ cached because the template can be imported elsewhere, and it
+ should have access to only the current template's globals.
+ """
+ if self.environment.is_async:
+ raise RuntimeError("Module is not available in async mode.")
+
+ if ctx is not None:
+ keys = ctx.globals_keys - self.globals.keys()
+
+ if keys:
+ return self.make_module({k: ctx.parent[k] for k in keys})
+
+ if self._module is None:
+ self._module = self.make_module()
+
+ return self._module
+
+ async def _get_default_module_async(
+ self, ctx: t.Optional[Context] = None
+ ) -> "TemplateModule":
+ if ctx is not None:
+ keys = ctx.globals_keys - self.globals.keys()
+
+ if keys:
+ return await self.make_module_async({k: ctx.parent[k] for k in keys})
+
+ if self._module is None:
+ self._module = await self.make_module_async()
+
+ return self._module
+
+ @property
+ def module(self) -> "TemplateModule":
+ """The template as module. This is used for imports in the
+ template runtime but is also useful if one wants to access
+ exported template variables from the Python layer:
+
+ >>> t = Template('{% macro foo() %}42{% endmacro %}23')
+ >>> str(t.module)
+ '23'
+ >>> t.module.foo() == u'42'
+ True
+
+ This attribute is not available if async mode is enabled.
+ """
+ return self._get_default_module()
+
+ def get_corresponding_lineno(self, lineno: int) -> int:
+ """Return the source line number of a line number in the
+ generated bytecode as they are not in sync.
+ """
+ for template_line, code_line in reversed(self.debug_info):
+ if code_line <= lineno:
+ return template_line
+ return 1
+
+ @property
+ def is_up_to_date(self) -> bool:
+ """If this variable is `False` there is a newer version available."""
+ if self._uptodate is None:
+ return True
+ return self._uptodate()
+
+ @property
+ def debug_info(self) -> t.List[t.Tuple[int, int]]:
+ """The debug info mapping."""
+ if self._debug_info:
+ return [
+ tuple(map(int, x.split("="))) # type: ignore
+ for x in self._debug_info.split("&")
+ ]
+
+ return []
+
+ def __repr__(self) -> str:
+ if self.name is None:
+ name = f"memory:{id(self):x}"
+ else:
+ name = repr(self.name)
+ return f"<{type(self).__name__} {name}>"
+
+
+class TemplateModule:
+ """Represents an imported template. All the exported names of the
+ template are available as attributes on this object. Additionally
+ converting it into a string renders the contents.
+ """
+
+ def __init__(
+ self,
+ template: Template,
+ context: Context,
+ body_stream: t.Optional[t.Iterable[str]] = None,
+ ) -> None:
+ if body_stream is None:
+ if context.environment.is_async:
+ raise RuntimeError(
+ "Async mode requires a body stream to be passed to"
+ " a template module. Use the async methods of the"
+ " API you are using."
+ )
+
+ body_stream = list(template.root_render_func(context))
+
+ self._body_stream = body_stream
+ self.__dict__.update(context.get_exported())
+ self.__name__ = template.name
+
+ def __html__(self) -> Markup:
+ return Markup(concat(self._body_stream))
+
+ def __str__(self) -> str:
+ return concat(self._body_stream)
+
+ def __repr__(self) -> str:
+ if self.__name__ is None:
+ name = f"memory:{id(self):x}"
+ else:
+ name = repr(self.__name__)
+ return f"<{type(self).__name__} {name}>"
+
+
+class TemplateExpression:
+    """The :meth:`jinja2.Environment.compile_expression` method returns an
+    instance of this object. It wraps a compiled expression and exposes it
+    as a callable.
+ """
+
+ def __init__(self, template: Template, undefined_to_none: bool) -> None:
+ self._template = template
+ self._undefined_to_none = undefined_to_none
+
+ def __call__(self, *args: t.Any, **kwargs: t.Any) -> t.Optional[t.Any]:
+ context = self._template.new_context(dict(*args, **kwargs))
+ consume(self._template.root_render_func(context))
+ rv = context.vars["result"]
+ if self._undefined_to_none and isinstance(rv, Undefined):
+ rv = None
+ return rv
+
+
+class TemplateStream:
+    """A template stream works pretty much like an ordinary Python generator
+    but it can buffer multiple items to reduce the number of total iterations.
+    By default the output is unbuffered, which means that for every unbuffered
+    instruction in the template one string is yielded.
+
+    If buffering is enabled with a buffer size of 5, five items are combined
+    into a new string. This is mainly useful if you are streaming
+    big templates to a client via WSGI, which flushes after each iteration.
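+
+    For example, assuming ``template`` is a loaded :class:`Template`::
+
+        stream = template.stream(name="World")
+        stream.enable_buffering(size=5)
+        stream.dump("hello.html")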
+ """
+
+ def __init__(self, gen: t.Iterator[str]) -> None:
+ self._gen = gen
+ self.disable_buffering()
+
+ def dump(
+ self,
+ fp: t.Union[str, t.IO[bytes]],
+ encoding: t.Optional[str] = None,
+ errors: t.Optional[str] = "strict",
+ ) -> None:
+ """Dump the complete stream into a file or file-like object.
+        By default strings are written; if you want to encode
+        before writing, specify an `encoding`.
+
+ Example usage::
+
+ Template('Hello {{ name }}!').stream(name='foo').dump('hello.html')
+ """
+ close = False
+
+ if isinstance(fp, str):
+ if encoding is None:
+ encoding = "utf-8"
+
+ real_fp: t.IO[bytes] = open(fp, "wb")
+ close = True
+ else:
+ real_fp = fp
+
+ try:
+ if encoding is not None:
+ iterable = (x.encode(encoding, errors) for x in self) # type: ignore
+ else:
+ iterable = self # type: ignore
+
+ if hasattr(real_fp, "writelines"):
+ real_fp.writelines(iterable)
+ else:
+ for item in iterable:
+ real_fp.write(item)
+ finally:
+ if close:
+ real_fp.close()
+
+ def disable_buffering(self) -> None:
+ """Disable the output buffering."""
+ self._next = partial(next, self._gen)
+ self.buffered = False
+
+ def _buffered_generator(self, size: int) -> t.Iterator[str]:
+ buf: t.List[str] = []
+ c_size = 0
+ push = buf.append
+
+ while True:
+ try:
+ while c_size < size:
+ c = next(self._gen)
+ push(c)
+ if c:
+ c_size += 1
+ except StopIteration:
+ if not c_size:
+ return
+ yield concat(buf)
+ del buf[:]
+ c_size = 0
+
+ def enable_buffering(self, size: int = 5) -> None:
+ """Enable buffering. Buffer `size` items before yielding them."""
+ if size <= 1:
+ raise ValueError("buffer size too small")
+
+ self.buffered = True
+ self._next = partial(next, self._buffered_generator(size))
+
+ def __iter__(self) -> "TemplateStream":
+ return self
+
+ def __next__(self) -> str:
+ return self._next() # type: ignore
+
+
+# hook in default template class. if anyone reads this comment: ignore that
+# it's possible to use custom templates ;-)
+Environment.template_class = Template
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/jinja2/exceptions.py b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/exceptions.py
new file mode 100644
index 0000000..082ebe8
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/exceptions.py
@@ -0,0 +1,166 @@
+import typing as t
+
+if t.TYPE_CHECKING:
+ from .runtime import Undefined
+
+
+class TemplateError(Exception):
+ """Baseclass for all template errors."""
+
+ def __init__(self, message: t.Optional[str] = None) -> None:
+ super().__init__(message)
+
+ @property
+ def message(self) -> t.Optional[str]:
+ return self.args[0] if self.args else None
+
+
+class TemplateNotFound(IOError, LookupError, TemplateError):
+ """Raised if a template does not exist.
+
+ .. versionchanged:: 2.11
+ If the given name is :class:`Undefined` and no message was
+ provided, an :exc:`UndefinedError` is raised.
+ """
+
+ # Silence the Python warning about message being deprecated since
+ # it's not valid here.
+ message: t.Optional[str] = None
+
+ def __init__(
+ self,
+ name: t.Optional[t.Union[str, "Undefined"]],
+ message: t.Optional[str] = None,
+ ) -> None:
+ IOError.__init__(self, name)
+
+ if message is None:
+ from .runtime import Undefined
+
+ if isinstance(name, Undefined):
+ name._fail_with_undefined_error()
+
+ message = name
+
+ self.message = message
+ self.name = name
+ self.templates = [name]
+
+ def __str__(self) -> str:
+ return str(self.message)
+
+
+class TemplatesNotFound(TemplateNotFound):
+ """Like :class:`TemplateNotFound` but raised if multiple templates
+ are selected. This is a subclass of :class:`TemplateNotFound`
+ exception, so just catching the base exception will catch both.
+
+ .. versionchanged:: 2.11
+ If a name in the list of names is :class:`Undefined`, a message
+ about it being undefined is shown rather than the empty string.
+
+ .. versionadded:: 2.2
+ """
+
+ def __init__(
+ self,
+ names: t.Sequence[t.Union[str, "Undefined"]] = (),
+ message: t.Optional[str] = None,
+ ) -> None:
+ if message is None:
+ from .runtime import Undefined
+
+ parts = []
+
+ for name in names:
+ if isinstance(name, Undefined):
+ parts.append(name._undefined_message)
+ else:
+ parts.append(name)
+
+ parts_str = ", ".join(map(str, parts))
+ message = f"none of the templates given were found: {parts_str}"
+
+ super().__init__(names[-1] if names else None, message)
+ self.templates = list(names)
+
+
+class TemplateSyntaxError(TemplateError):
+ """Raised to tell the user that there is a problem with the template."""
+
+ def __init__(
+ self,
+ message: str,
+ lineno: int,
+ name: t.Optional[str] = None,
+ filename: t.Optional[str] = None,
+ ) -> None:
+ super().__init__(message)
+ self.lineno = lineno
+ self.name = name
+ self.filename = filename
+ self.source: t.Optional[str] = None
+
+ # this is set to True if the debug.translate_syntax_error
+ # function translated the syntax error into a new traceback
+ self.translated = False
+
+ def __str__(self) -> str:
+ # for translated errors we only return the message
+ if self.translated:
+ return t.cast(str, self.message)
+
+ # otherwise attach some stuff
+ location = f"line {self.lineno}"
+ name = self.filename or self.name
+ if name:
+ location = f'File "{name}", {location}'
+ lines = [t.cast(str, self.message), " " + location]
+
+ # if the source is set, add the line to the output
+ if self.source is not None:
+ try:
+ line = self.source.splitlines()[self.lineno - 1]
+ except IndexError:
+ pass
+ else:
+ lines.append(" " + line.strip())
+
+ return "\n".join(lines)
+
+ def __reduce__(self): # type: ignore
+ # https://bugs.python.org/issue1692335 Exceptions that take
+ # multiple required arguments have problems with pickling.
+ # Without this, raises TypeError: __init__() missing 1 required
+ # positional argument: 'lineno'
+ return self.__class__, (self.message, self.lineno, self.name, self.filename)
+
+
+class TemplateAssertionError(TemplateSyntaxError):
+ """Like a template syntax error, but covers cases where something in the
+ template caused an error at compile time that wasn't necessarily caused
+ by a syntax error. However it's a direct subclass of
+ :exc:`TemplateSyntaxError` and has the same attributes.
+ """
+
+
+class TemplateRuntimeError(TemplateError):
+ """A generic runtime error in the template engine. Under some situations
+ Jinja may raise this exception.
+ """
+
+
+class UndefinedError(TemplateRuntimeError):
+ """Raised if a template tries to operate on :class:`Undefined`."""
+
+
+class SecurityError(TemplateRuntimeError):
+ """Raised if a template tries to do something insecure if the
+ sandbox is enabled.
+ """
+
+
+class FilterArgumentError(TemplateRuntimeError):
+ """This error is raised if a filter was called with inappropriate
+    arguments.
+ """
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/jinja2/ext.py b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/ext.py
new file mode 100644
index 0000000..c7af8d4
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/ext.py
@@ -0,0 +1,870 @@
+"""Extension API for adding custom tags and behavior."""
+
+import pprint
+import re
+import typing as t
+
+from markupsafe import Markup
+
+from . import defaults
+from . import nodes
+from .environment import Environment
+from .exceptions import TemplateAssertionError
+from .exceptions import TemplateSyntaxError
+from .runtime import concat # type: ignore
+from .runtime import Context
+from .runtime import Undefined
+from .utils import import_string
+from .utils import pass_context
+
+if t.TYPE_CHECKING:
+ import typing_extensions as te
+
+ from .lexer import Token
+ from .lexer import TokenStream
+ from .parser import Parser
+
+ class _TranslationsBasic(te.Protocol):
+ def gettext(self, message: str) -> str: ...
+
+ def ngettext(self, singular: str, plural: str, n: int) -> str:
+ pass
+
+ class _TranslationsContext(_TranslationsBasic):
+ def pgettext(self, context: str, message: str) -> str: ...
+
+ def npgettext(
+ self, context: str, singular: str, plural: str, n: int
+ ) -> str: ...
+
+ _SupportedTranslations = t.Union[_TranslationsBasic, _TranslationsContext]
+
+
+# I18N functions available in Jinja templates. If the I18N library
+# provides ugettext, it will be assigned to gettext.
+GETTEXT_FUNCTIONS: t.Tuple[str, ...] = (
+ "_",
+ "gettext",
+ "ngettext",
+ "pgettext",
+ "npgettext",
+)
+_ws_re = re.compile(r"\s*\n\s*")
+
+
+class Extension:
+ """Extensions can be used to add extra functionality to the Jinja template
+ system at the parser level. Custom extensions are bound to an environment
+ but may not store environment specific data on `self`. The reason for
+ this is that an extension can be bound to another environment (for
+ overlays) by creating a copy and reassigning the `environment` attribute.
+
+ As extensions are created by the environment they cannot accept any
+ arguments for configuration. One may want to work around that by using
+ a factory function, but that is not possible as extensions are identified
+ by their import name. The correct way to configure the extension is
+    storing the configuration values on the environment. Because the
+    environment then ends up acting as central configuration storage, the
+    attribute names may clash, which is why extensions have to ensure that the
+    names they choose for configuration are not too generic. ``prefix``, for
+    example, is a terrible name; ``fragment_cache_prefix``, on the other hand,
+    is a good name because it includes the name of the extension (fragment
+    cache).
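+
+    A minimal sketch of such an extension; the class name and the global it
+    registers are purely illustrative::
+
+        class VersionExtension(Extension):
+            def __init__(self, environment):
+                super().__init__(environment)
+                environment.globals["build_version"] = "1.0"
+
+        env = Environment(extensions=[VersionExtension])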
+ """
+
+ identifier: t.ClassVar[str]
+
+ def __init_subclass__(cls) -> None:
+ cls.identifier = f"{cls.__module__}.{cls.__name__}"
+
+    #: if this extension parses, this is the set of tags it's listening to.
+ tags: t.Set[str] = set()
+
+ #: the priority of that extension. This is especially useful for
+ #: extensions that preprocess values. A lower value means higher
+ #: priority.
+ #:
+ #: .. versionadded:: 2.4
+ priority = 100
+
+ def __init__(self, environment: Environment) -> None:
+ self.environment = environment
+
+ def bind(self, environment: Environment) -> "te.Self":
+ """Create a copy of this extension bound to another environment."""
+ rv = object.__new__(self.__class__)
+ rv.__dict__.update(self.__dict__)
+ rv.environment = environment
+ return rv
+
+ def preprocess(
+ self, source: str, name: t.Optional[str], filename: t.Optional[str] = None
+ ) -> str:
+ """This method is called before the actual lexing and can be used to
+ preprocess the source. The `filename` is optional. The return value
+ must be the preprocessed source.
+ """
+ return source
+
+ def filter_stream(
+ self, stream: "TokenStream"
+ ) -> t.Union["TokenStream", t.Iterable["Token"]]:
+ """It's passed a :class:`~jinja2.lexer.TokenStream` that can be used
+ to filter tokens returned. This method has to return an iterable of
+ :class:`~jinja2.lexer.Token`\\s, but it doesn't have to return a
+ :class:`~jinja2.lexer.TokenStream`.
+ """
+ return stream
+
+ def parse(self, parser: "Parser") -> t.Union[nodes.Node, t.List[nodes.Node]]:
+ """If any of the :attr:`tags` matched this method is called with the
+ parser as first argument. The token the parser stream is pointing at
+ is the name token that matched. This method has to return one or a
+ list of multiple nodes.
+ """
+ raise NotImplementedError()
+
+ def attr(
+ self, name: str, lineno: t.Optional[int] = None
+ ) -> nodes.ExtensionAttribute:
+ """Return an attribute node for the current extension. This is useful
+ to pass constants on extensions to generated template code.
+
+ ::
+
+ self.attr('_my_attribute', lineno=lineno)
+ """
+ return nodes.ExtensionAttribute(self.identifier, name, lineno=lineno)
+
+ def call_method(
+ self,
+ name: str,
+ args: t.Optional[t.List[nodes.Expr]] = None,
+ kwargs: t.Optional[t.List[nodes.Keyword]] = None,
+ dyn_args: t.Optional[nodes.Expr] = None,
+ dyn_kwargs: t.Optional[nodes.Expr] = None,
+ lineno: t.Optional[int] = None,
+ ) -> nodes.Call:
+ """Call a method of the extension. This is a shortcut for
+ :meth:`attr` + :class:`jinja2.nodes.Call`.
+ """
+ if args is None:
+ args = []
+ if kwargs is None:
+ kwargs = []
+ return nodes.Call(
+ self.attr(name, lineno=lineno),
+ args,
+ kwargs,
+ dyn_args,
+ dyn_kwargs,
+ lineno=lineno,
+ )
+
+
+@pass_context
+def _gettext_alias(
+ __context: Context, *args: t.Any, **kwargs: t.Any
+) -> t.Union[t.Any, Undefined]:
+ return __context.call(__context.resolve("gettext"), *args, **kwargs)
+
+
+def _make_new_gettext(func: t.Callable[[str], str]) -> t.Callable[..., str]:
+ @pass_context
+ def gettext(__context: Context, __string: str, **variables: t.Any) -> str:
+ rv = __context.call(func, __string)
+ if __context.eval_ctx.autoescape:
+ rv = Markup(rv)
+ # Always treat as a format string, even if there are no
+ # variables. This makes translation strings more consistent
+ # and predictable. This requires escaping
+ return rv % variables # type: ignore
+
+ return gettext
+
+
+def _make_new_ngettext(func: t.Callable[[str, str, int], str]) -> t.Callable[..., str]:
+ @pass_context
+ def ngettext(
+ __context: Context,
+ __singular: str,
+ __plural: str,
+ __num: int,
+ **variables: t.Any,
+ ) -> str:
+ variables.setdefault("num", __num)
+ rv = __context.call(func, __singular, __plural, __num)
+ if __context.eval_ctx.autoescape:
+ rv = Markup(rv)
+ # Always treat as a format string, see gettext comment above.
+ return rv % variables # type: ignore
+
+ return ngettext
+
+
+def _make_new_pgettext(func: t.Callable[[str, str], str]) -> t.Callable[..., str]:
+ @pass_context
+ def pgettext(
+ __context: Context, __string_ctx: str, __string: str, **variables: t.Any
+ ) -> str:
+ variables.setdefault("context", __string_ctx)
+ rv = __context.call(func, __string_ctx, __string)
+
+ if __context.eval_ctx.autoescape:
+ rv = Markup(rv)
+
+ # Always treat as a format string, see gettext comment above.
+ return rv % variables # type: ignore
+
+ return pgettext
+
+
+def _make_new_npgettext(
+ func: t.Callable[[str, str, str, int], str],
+) -> t.Callable[..., str]:
+ @pass_context
+ def npgettext(
+ __context: Context,
+ __string_ctx: str,
+ __singular: str,
+ __plural: str,
+ __num: int,
+ **variables: t.Any,
+ ) -> str:
+ variables.setdefault("context", __string_ctx)
+ variables.setdefault("num", __num)
+ rv = __context.call(func, __string_ctx, __singular, __plural, __num)
+
+ if __context.eval_ctx.autoescape:
+ rv = Markup(rv)
+
+ # Always treat as a format string, see gettext comment above.
+ return rv % variables # type: ignore
+
+ return npgettext
+
+
+class InternationalizationExtension(Extension):
+ """This extension adds gettext support to Jinja."""
+
+ tags = {"trans"}
+
+ # TODO: the i18n extension is currently reevaluating values in a few
+ # situations. Take this example:
+ # {% trans count=something() %}{{ count }} foo{% pluralize
+    #     %}{{ count }} foos{% endtrans %}
+ # something is called twice here. One time for the gettext value and
+ # the other time for the n-parameter of the ngettext function.
+
+ def __init__(self, environment: Environment) -> None:
+ super().__init__(environment)
+ environment.globals["_"] = _gettext_alias
+ environment.extend(
+ install_gettext_translations=self._install,
+ install_null_translations=self._install_null,
+ install_gettext_callables=self._install_callables,
+ uninstall_gettext_translations=self._uninstall,
+ extract_translations=self._extract,
+ newstyle_gettext=False,
+ )
+
+ def _install(
+ self, translations: "_SupportedTranslations", newstyle: t.Optional[bool] = None
+ ) -> None:
+ # ugettext and ungettext are preferred in case the I18N library
+ # is providing compatibility with older Python versions.
+ gettext = getattr(translations, "ugettext", None)
+ if gettext is None:
+ gettext = translations.gettext
+ ngettext = getattr(translations, "ungettext", None)
+ if ngettext is None:
+ ngettext = translations.ngettext
+
+ pgettext = getattr(translations, "pgettext", None)
+ npgettext = getattr(translations, "npgettext", None)
+ self._install_callables(
+ gettext, ngettext, newstyle=newstyle, pgettext=pgettext, npgettext=npgettext
+ )
+
+ def _install_null(self, newstyle: t.Optional[bool] = None) -> None:
+ import gettext
+
+ translations = gettext.NullTranslations()
+
+ if hasattr(translations, "pgettext"):
+            # Python >= 3.8, NullTranslations provides pgettext/npgettext
+ pgettext = translations.pgettext
+ else:
+
+ def pgettext(c: str, s: str) -> str: # type: ignore[misc]
+ return s
+
+ if hasattr(translations, "npgettext"):
+ npgettext = translations.npgettext
+ else:
+
+ def npgettext(c: str, s: str, p: str, n: int) -> str: # type: ignore[misc]
+ return s if n == 1 else p
+
+ self._install_callables(
+ gettext=translations.gettext,
+ ngettext=translations.ngettext,
+ newstyle=newstyle,
+ pgettext=pgettext,
+ npgettext=npgettext,
+ )
+
+ def _install_callables(
+ self,
+ gettext: t.Callable[[str], str],
+ ngettext: t.Callable[[str, str, int], str],
+ newstyle: t.Optional[bool] = None,
+ pgettext: t.Optional[t.Callable[[str, str], str]] = None,
+ npgettext: t.Optional[t.Callable[[str, str, str, int], str]] = None,
+ ) -> None:
+ if newstyle is not None:
+ self.environment.newstyle_gettext = newstyle # type: ignore
+ if self.environment.newstyle_gettext: # type: ignore
+ gettext = _make_new_gettext(gettext)
+ ngettext = _make_new_ngettext(ngettext)
+
+ if pgettext is not None:
+ pgettext = _make_new_pgettext(pgettext)
+
+ if npgettext is not None:
+ npgettext = _make_new_npgettext(npgettext)
+
+ self.environment.globals.update(
+ gettext=gettext, ngettext=ngettext, pgettext=pgettext, npgettext=npgettext
+ )
+
+ def _uninstall(self, translations: "_SupportedTranslations") -> None:
+ for key in ("gettext", "ngettext", "pgettext", "npgettext"):
+ self.environment.globals.pop(key, None)
+
+ def _extract(
+ self,
+ source: t.Union[str, nodes.Template],
+ gettext_functions: t.Sequence[str] = GETTEXT_FUNCTIONS,
+ ) -> t.Iterator[
+ t.Tuple[int, str, t.Union[t.Optional[str], t.Tuple[t.Optional[str], ...]]]
+ ]:
+ if isinstance(source, str):
+ source = self.environment.parse(source)
+ return extract_from_ast(source, gettext_functions)
+
+ def parse(self, parser: "Parser") -> t.Union[nodes.Node, t.List[nodes.Node]]:
+ """Parse a translatable tag."""
+ lineno = next(parser.stream).lineno
+
+ context = None
+ context_token = parser.stream.next_if("string")
+
+ if context_token is not None:
+ context = context_token.value
+
+ # find all the variables referenced. Additionally a variable can be
+ # defined in the body of the trans block too, but this is checked at
+ # a later state.
+ plural_expr: t.Optional[nodes.Expr] = None
+ plural_expr_assignment: t.Optional[nodes.Assign] = None
+ num_called_num = False
+ variables: t.Dict[str, nodes.Expr] = {}
+ trimmed = None
+ while parser.stream.current.type != "block_end":
+ if variables:
+ parser.stream.expect("comma")
+
+ # skip colon for python compatibility
+ if parser.stream.skip_if("colon"):
+ break
+
+ token = parser.stream.expect("name")
+ if token.value in variables:
+ parser.fail(
+ f"translatable variable {token.value!r} defined twice.",
+ token.lineno,
+ exc=TemplateAssertionError,
+ )
+
+ # expressions
+ if parser.stream.current.type == "assign":
+ next(parser.stream)
+ variables[token.value] = var = parser.parse_expression()
+ elif trimmed is None and token.value in ("trimmed", "notrimmed"):
+ trimmed = token.value == "trimmed"
+ continue
+ else:
+ variables[token.value] = var = nodes.Name(token.value, "load")
+
+ if plural_expr is None:
+ if isinstance(var, nodes.Call):
+ plural_expr = nodes.Name("_trans", "load")
+ variables[token.value] = plural_expr
+ plural_expr_assignment = nodes.Assign(
+ nodes.Name("_trans", "store"), var
+ )
+ else:
+ plural_expr = var
+ num_called_num = token.value == "num"
+
+ parser.stream.expect("block_end")
+
+ plural = None
+ have_plural = False
+ referenced = set()
+
+ # now parse until endtrans or pluralize
+ singular_names, singular = self._parse_block(parser, True)
+ if singular_names:
+ referenced.update(singular_names)
+ if plural_expr is None:
+ plural_expr = nodes.Name(singular_names[0], "load")
+ num_called_num = singular_names[0] == "num"
+
+ # if we have a pluralize block, we parse that too
+ if parser.stream.current.test("name:pluralize"):
+ have_plural = True
+ next(parser.stream)
+ if parser.stream.current.type != "block_end":
+ token = parser.stream.expect("name")
+ if token.value not in variables:
+ parser.fail(
+ f"unknown variable {token.value!r} for pluralization",
+ token.lineno,
+ exc=TemplateAssertionError,
+ )
+ plural_expr = variables[token.value]
+ num_called_num = token.value == "num"
+ parser.stream.expect("block_end")
+ plural_names, plural = self._parse_block(parser, False)
+ next(parser.stream)
+ referenced.update(plural_names)
+ else:
+ next(parser.stream)
+
+ # register free names as simple name expressions
+ for name in referenced:
+ if name not in variables:
+ variables[name] = nodes.Name(name, "load")
+
+ if not have_plural:
+ plural_expr = None
+ elif plural_expr is None:
+ parser.fail("pluralize without variables", lineno)
+
+ if trimmed is None:
+ trimmed = self.environment.policies["ext.i18n.trimmed"]
+ if trimmed:
+ singular = self._trim_whitespace(singular)
+ if plural:
+ plural = self._trim_whitespace(plural)
+
+ node = self._make_node(
+ singular,
+ plural,
+ context,
+ variables,
+ plural_expr,
+ bool(referenced),
+ num_called_num and have_plural,
+ )
+ node.set_lineno(lineno)
+ if plural_expr_assignment is not None:
+ return [plural_expr_assignment, node]
+ else:
+ return node
+
+ def _trim_whitespace(self, string: str, _ws_re: t.Pattern[str] = _ws_re) -> str:
+ return _ws_re.sub(" ", string.strip())
+
+ def _parse_block(
+ self, parser: "Parser", allow_pluralize: bool
+ ) -> t.Tuple[t.List[str], str]:
+ """Parse until the next block tag with a given name."""
+ referenced = []
+ buf = []
+
+ while True:
+ if parser.stream.current.type == "data":
+ buf.append(parser.stream.current.value.replace("%", "%%"))
+ next(parser.stream)
+ elif parser.stream.current.type == "variable_begin":
+ next(parser.stream)
+ name = parser.stream.expect("name").value
+ referenced.append(name)
+ buf.append(f"%({name})s")
+ parser.stream.expect("variable_end")
+ elif parser.stream.current.type == "block_begin":
+ next(parser.stream)
+ block_name = (
+ parser.stream.current.value
+ if parser.stream.current.type == "name"
+ else None
+ )
+ if block_name == "endtrans":
+ break
+ elif block_name == "pluralize":
+ if allow_pluralize:
+ break
+ parser.fail(
+ "a translatable section can have only one pluralize section"
+ )
+ elif block_name == "trans":
+ parser.fail(
+ "trans blocks can't be nested; did you mean `endtrans`?"
+ )
+ parser.fail(
+ f"control structures in translatable sections are not allowed; "
+ f"saw `{block_name}`"
+ )
+ elif parser.stream.eos:
+ parser.fail("unclosed translation block")
+ else:
+ raise RuntimeError("internal parser error")
+
+ return referenced, concat(buf)
+
+ def _make_node(
+ self,
+ singular: str,
+ plural: t.Optional[str],
+ context: t.Optional[str],
+ variables: t.Dict[str, nodes.Expr],
+ plural_expr: t.Optional[nodes.Expr],
+ vars_referenced: bool,
+ num_called_num: bool,
+ ) -> nodes.Output:
+ """Generates a useful node from the data provided."""
+ newstyle = self.environment.newstyle_gettext # type: ignore
+ node: nodes.Expr
+
+ # no variables referenced? no need to escape for old style
+ # gettext invocations only if there are vars.
+ if not vars_referenced and not newstyle:
+ singular = singular.replace("%%", "%")
+ if plural:
+ plural = plural.replace("%%", "%")
+
+ func_name = "gettext"
+ func_args: t.List[nodes.Expr] = [nodes.Const(singular)]
+
+ if context is not None:
+ func_args.insert(0, nodes.Const(context))
+ func_name = f"p{func_name}"
+
+ if plural_expr is not None:
+ func_name = f"n{func_name}"
+ func_args.extend((nodes.Const(plural), plural_expr))
+
+ node = nodes.Call(nodes.Name(func_name, "load"), func_args, [], None, None)
+
+ # in case newstyle gettext is used, the method is powerful
+ # enough to handle the variable expansion and autoescape
+ # handling itself
+ if newstyle:
+ for key, value in variables.items():
+ # the function adds that later anyways in case num was
+ # called num, so just skip it.
+ if num_called_num and key == "num":
+ continue
+ node.kwargs.append(nodes.Keyword(key, value))
+
+ # otherwise do that here
+ else:
+ # mark the return value as safe if we are in an
+ # environment with autoescaping turned on
+ node = nodes.MarkSafeIfAutoescape(node)
+ if variables:
+ node = nodes.Mod(
+ node,
+ nodes.Dict(
+ [
+ nodes.Pair(nodes.Const(key), value)
+ for key, value in variables.items()
+ ]
+ ),
+ )
+ return nodes.Output([node])
+
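+# A minimal usage sketch for the extension above (illustrative only; the
+# template string and the use of null translations are assumptions made for
+# the example, not part of this module):
+#
+#     from jinja2 import Environment
+#
+#     env = Environment(extensions=["jinja2.ext.i18n"])
+#     env.install_null_translations(newstyle=True)
+#     tmpl = env.from_string(
+#         "{% trans count=3 %}{{ count }} item"
+#         "{% pluralize %}{{ count }} items{% endtrans %}"
+#     )
+#     print(tmpl.render())  # expected to print "3 items"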
+
+class ExprStmtExtension(Extension):
+ """Adds a `do` tag to Jinja that works like the print statement just
+ that it doesn't print the return value.
+ """
+
+ tags = {"do"}
+
+ def parse(self, parser: "Parser") -> nodes.ExprStmt:
+ node = nodes.ExprStmt(lineno=next(parser.stream).lineno)
+ node.node = parser.parse_tuple()
+ return node
+
+
+class LoopControlExtension(Extension):
+ """Adds break and continue to the template engine."""
+
+ tags = {"break", "continue"}
+
+ def parse(self, parser: "Parser") -> t.Union[nodes.Break, nodes.Continue]:
+ token = next(parser.stream)
+ if token.value == "break":
+ return nodes.Break(lineno=token.lineno)
+ return nodes.Continue(lineno=token.lineno)
+
+
+class DebugExtension(Extension):
+ """A ``{% debug %}`` tag that dumps the available variables,
+ filters, and tests.
+
+ .. code-block:: html+jinja
+
+ {% debug %}
+
+ .. code-block:: text
+
+        {'context': {'cycler': <class 'jinja2.utils.Cycler'>,
+                     ...,
+                     'namespace': <class 'jinja2.utils.Namespace'>},
+         'filters': ['abs', 'attr', 'batch', 'capitalize', 'center', 'count', 'd',
+                     ..., 'urlencode', 'urlize', 'wordcount', 'wordwrap', 'xmlattr'],
+         'tests': ['!=', '<', '<=', '==', '>', '>=', 'callable', 'defined',
+                   ..., 'odd', 'sameas', 'sequence', 'string', 'undefined', 'upper']}
+
+ .. versionadded:: 2.11.0
+ """
+
+ tags = {"debug"}
+
+ def parse(self, parser: "Parser") -> nodes.Output:
+ lineno = parser.stream.expect("name:debug").lineno
+ context = nodes.ContextReference()
+ result = self.call_method("_render", [context], lineno=lineno)
+ return nodes.Output([result], lineno=lineno)
+
+ def _render(self, context: Context) -> str:
+ result = {
+ "context": context.get_all(),
+ "filters": sorted(self.environment.filters.keys()),
+ "tests": sorted(self.environment.tests.keys()),
+ }
+
+ # Set the depth since the intent is to show the top few names.
+ return pprint.pformat(result, depth=3, compact=True)
+
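+# Illustrative sketch of enabling the tag defined above (the template text is
+# an assumption chosen for demonstration):
+#
+#     from jinja2 import Environment
+#
+#     env = Environment(extensions=["jinja2.ext.debug"])
+#     print(env.from_string("{% debug %}").render())
+#     # prints the pprint'ed context plus the filter and test names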
+
+def extract_from_ast(
+ ast: nodes.Template,
+ gettext_functions: t.Sequence[str] = GETTEXT_FUNCTIONS,
+ babel_style: bool = True,
+) -> t.Iterator[
+ t.Tuple[int, str, t.Union[t.Optional[str], t.Tuple[t.Optional[str], ...]]]
+]:
+ """Extract localizable strings from the given template node. Per
+ default this function returns matches in babel style that means non string
+ parameters as well as keyword arguments are returned as `None`. This
+ allows Babel to figure out what you really meant if you are using
+ gettext functions that allow keyword arguments for placeholder expansion.
+ If you don't want that behavior set the `babel_style` parameter to `False`
+ which causes only strings to be returned and parameters are always stored
+ in tuples. As a consequence invalid gettext calls (calls without a single
+ string parameter or string parameters after non-string parameters) are
+ skipped.
+
+ This example explains the behavior:
+
+ >>> from jinja2 import Environment
+ >>> env = Environment()
+ >>> node = env.parse('{{ (_("foo"), _(), ngettext("foo", "bar", 42)) }}')
+ >>> list(extract_from_ast(node))
+ [(1, '_', 'foo'), (1, '_', ()), (1, 'ngettext', ('foo', 'bar', None))]
+ >>> list(extract_from_ast(node, babel_style=False))
+ [(1, '_', ('foo',)), (1, 'ngettext', ('foo', 'bar'))]
+
+ For every string found this function yields a ``(lineno, function,
+ message)`` tuple, where:
+
+ * ``lineno`` is the number of the line on which the string was found,
+ * ``function`` is the name of the ``gettext`` function used (if the
+ string was extracted from embedded Python code), and
+ * ``message`` is the string, or a tuple of strings for functions
+ with multiple string arguments.
+
+    This extraction function operates on the AST and is therefore unable
+    to extract any comments. For comment support you have to use the Babel
+    extraction interface or extract comments yourself.
+ """
+ out: t.Union[t.Optional[str], t.Tuple[t.Optional[str], ...]]
+
+ for node in ast.find_all(nodes.Call):
+ if (
+ not isinstance(node.node, nodes.Name)
+ or node.node.name not in gettext_functions
+ ):
+ continue
+
+ strings: t.List[t.Optional[str]] = []
+
+ for arg in node.args:
+ if isinstance(arg, nodes.Const) and isinstance(arg.value, str):
+ strings.append(arg.value)
+ else:
+ strings.append(None)
+
+ for _ in node.kwargs:
+ strings.append(None)
+ if node.dyn_args is not None:
+ strings.append(None)
+ if node.dyn_kwargs is not None:
+ strings.append(None)
+
+ if not babel_style:
+ out = tuple(x for x in strings if x is not None)
+
+ if not out:
+ continue
+ else:
+ if len(strings) == 1:
+ out = strings[0]
+ else:
+ out = tuple(strings)
+
+ yield node.lineno, node.node.name, out
+
+
+class _CommentFinder:
+ """Helper class to find comments in a token stream. Can only
+ find comments for gettext calls forwards. Once the comment
+ from line 4 is found, a comment for line 1 will not return a
+ usable value.
+ """
+
+ def __init__(
+ self, tokens: t.Sequence[t.Tuple[int, str, str]], comment_tags: t.Sequence[str]
+ ) -> None:
+ self.tokens = tokens
+ self.comment_tags = comment_tags
+ self.offset = 0
+ self.last_lineno = 0
+
+ def find_backwards(self, offset: int) -> t.List[str]:
+ try:
+ for _, token_type, token_value in reversed(
+ self.tokens[self.offset : offset]
+ ):
+ if token_type in ("comment", "linecomment"):
+ try:
+ prefix, comment = token_value.split(None, 1)
+ except ValueError:
+ continue
+ if prefix in self.comment_tags:
+ return [comment.rstrip()]
+ return []
+ finally:
+ self.offset = offset
+
+ def find_comments(self, lineno: int) -> t.List[str]:
+ if not self.comment_tags or self.last_lineno > lineno:
+ return []
+ for idx, (token_lineno, _, _) in enumerate(self.tokens[self.offset :]):
+ if token_lineno > lineno:
+ return self.find_backwards(self.offset + idx)
+ return self.find_backwards(len(self.tokens))
+
+
+def babel_extract(
+ fileobj: t.BinaryIO,
+ keywords: t.Sequence[str],
+ comment_tags: t.Sequence[str],
+ options: t.Dict[str, t.Any],
+) -> t.Iterator[
+ t.Tuple[
+ int, str, t.Union[t.Optional[str], t.Tuple[t.Optional[str], ...]], t.List[str]
+ ]
+]:
+ """Babel extraction method for Jinja templates.
+
+ .. versionchanged:: 2.3
+ Basic support for translation comments was added. If `comment_tags`
+ is now set to a list of keywords for extraction, the extractor will
+ try to find the best preceding comment that begins with one of the
+ keywords. For best results, make sure to not have more than one
+ gettext call in one line of code and the matching comment in the
+ same line or the line before.
+
+ .. versionchanged:: 2.5.1
+ The `newstyle_gettext` flag can be set to `True` to enable newstyle
+ gettext calls.
+
+ .. versionchanged:: 2.7
+ A `silent` option can now be provided. If set to `False` template
+ syntax errors are propagated instead of being ignored.
+
+ :param fileobj: the file-like object the messages should be extracted from
+ :param keywords: a list of keywords (i.e. function names) that should be
+ recognized as translation functions
+ :param comment_tags: a list of translator tags to search for and include
+ in the results.
+ :param options: a dictionary of additional options (optional)
+ :return: an iterator over ``(lineno, funcname, message, comments)`` tuples.
+ (comments will be empty currently)
+ """
+ extensions: t.Dict[t.Type[Extension], None] = {}
+
+ for extension_name in options.get("extensions", "").split(","):
+ extension_name = extension_name.strip()
+
+ if not extension_name:
+ continue
+
+ extensions[import_string(extension_name)] = None
+
+ if InternationalizationExtension not in extensions:
+ extensions[InternationalizationExtension] = None
+
+ def getbool(options: t.Mapping[str, str], key: str, default: bool = False) -> bool:
+ return options.get(key, str(default)).lower() in {"1", "on", "yes", "true"}
+
+ silent = getbool(options, "silent", True)
+ environment = Environment(
+ options.get("block_start_string", defaults.BLOCK_START_STRING),
+ options.get("block_end_string", defaults.BLOCK_END_STRING),
+ options.get("variable_start_string", defaults.VARIABLE_START_STRING),
+ options.get("variable_end_string", defaults.VARIABLE_END_STRING),
+ options.get("comment_start_string", defaults.COMMENT_START_STRING),
+ options.get("comment_end_string", defaults.COMMENT_END_STRING),
+ options.get("line_statement_prefix") or defaults.LINE_STATEMENT_PREFIX,
+ options.get("line_comment_prefix") or defaults.LINE_COMMENT_PREFIX,
+ getbool(options, "trim_blocks", defaults.TRIM_BLOCKS),
+ getbool(options, "lstrip_blocks", defaults.LSTRIP_BLOCKS),
+ defaults.NEWLINE_SEQUENCE,
+ getbool(options, "keep_trailing_newline", defaults.KEEP_TRAILING_NEWLINE),
+ tuple(extensions),
+ cache_size=0,
+ auto_reload=False,
+ )
+
+ if getbool(options, "trimmed"):
+ environment.policies["ext.i18n.trimmed"] = True
+ if getbool(options, "newstyle_gettext"):
+ environment.newstyle_gettext = True # type: ignore
+
+ source = fileobj.read().decode(options.get("encoding", "utf-8"))
+ try:
+ node = environment.parse(source)
+ tokens = list(environment.lex(environment.preprocess(source)))
+ except TemplateSyntaxError:
+ if not silent:
+ raise
+ # skip templates with syntax errors
+ return
+
+ finder = _CommentFinder(tokens, comment_tags)
+ for lineno, func, message in extract_from_ast(node, keywords):
+ yield lineno, func, message, finder.find_comments(lineno)
+
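+# A rough usage sketch for babel_extract (normally Babel invokes it through a
+# mapping configuration; the in-memory template below is an assumption made
+# purely for illustration):
+#
+#     import io
+#
+#     src = io.BytesIO(b"{% trans %}Hello {{ user }}!{% endtrans %}")
+#     for lineno, funcname, message, comments in babel_extract(
+#         src, GETTEXT_FUNCTIONS, ["NOTE:"], {}
+#     ):
+#         print(lineno, funcname, message, comments)
+#     # expected to yield roughly: 1 gettext Hello %(user)s! []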
+
+#: nicer import names
+i18n = InternationalizationExtension
+do = ExprStmtExtension
+loopcontrols = LoopControlExtension
+debug = DebugExtension
diff --git a/stripe_config/.venv/lib/python3.12/site-packages/jinja2/filters.py b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/filters.py
new file mode 100644
index 0000000..2bcba4f
--- /dev/null
+++ b/stripe_config/.venv/lib/python3.12/site-packages/jinja2/filters.py
@@ -0,0 +1,1873 @@
+"""Built-in template filters used with the ``|`` operator."""
+
+import math
+import random
+import re
+import typing
+import typing as t
+from collections import abc
+from inspect import getattr_static
+from itertools import chain
+from itertools import groupby
+
+from markupsafe import escape
+from markupsafe import Markup
+from markupsafe import soft_str
+
+from .async_utils import async_variant
+from .async_utils import auto_aiter
+from .async_utils import auto_await
+from .async_utils import auto_to_list
+from .exceptions import FilterArgumentError
+from .runtime import Undefined
+from .utils import htmlsafe_json_dumps
+from .utils import pass_context
+from .utils import pass_environment
+from .utils import pass_eval_context
+from .utils import pformat
+from .utils import url_quote
+from .utils import urlize
+
+if t.TYPE_CHECKING:
+ import typing_extensions as te
+
+ from .environment import Environment
+ from .nodes import EvalContext
+ from .runtime import Context
+ from .sandbox import SandboxedEnvironment # noqa: F401
+
+ class HasHTML(te.Protocol):
+ def __html__(self) -> str:
+ pass
+
+
+F = t.TypeVar("F", bound=t.Callable[..., t.Any])
+K = t.TypeVar("K")
+V = t.TypeVar("V")
+
+
+def ignore_case(value: V) -> V:
+ """For use as a postprocessor for :func:`make_attrgetter`. Converts strings
+ to lowercase and returns other types as-is."""
+ if isinstance(value, str):
+ return t.cast(V, value.lower())
+
+ return value
+
+
+def make_attrgetter(
+ environment: "Environment",
+ attribute: t.Optional[t.Union[str, int]],
+ postprocess: t.Optional[t.Callable[[t.Any], t.Any]] = None,
+ default: t.Optional[t.Any] = None,
+) -> t.Callable[[t.Any], t.Any]:
+ """Returns a callable that looks up the given attribute from a
+ passed object with the rules of the environment. Dots are allowed
+ to access attributes of attributes. Integer parts in paths are
+ looked up as integers.
+ """
+ parts = _prepare_attribute_parts(attribute)
+
+ def attrgetter(item: t.Any) -> t.Any:
+ for part in parts:
+ item = environment.getitem(item, part)
+
+ if default is not None and isinstance(item, Undefined):
+ item = default
+
+ if postprocess is not None:
+ item = postprocess(item)
+
+ return item
+
+ return attrgetter
+
+
+def make_multi_attrgetter(
+ environment: "Environment",
+ attribute: t.Optional[t.Union[str, int]],
+ postprocess: t.Optional[t.Callable[[t.Any], t.Any]] = None,
+) -> t.Callable[[t.Any], t.List[t.Any]]:
+ """Returns a callable that looks up the given comma separated
+ attributes from a passed object with the rules of the environment.
+ Dots are allowed to access attributes of each attribute. Integer
+ parts in paths are looked up as integers.
+
+ The value returned by the returned callable is a list of extracted
+ attribute values.
+
+ Examples of attribute: "attr1,attr2", "attr1.inner1.0,attr2.inner2.0", etc.
+ """
+ if isinstance(attribute, str):
+ split: t.Sequence[t.Union[str, int, None]] = attribute.split(",")
+ else:
+ split = [attribute]
+
+ parts = [_prepare_attribute_parts(item) for item in split]
+
+ def attrgetter(item: t.Any) -> t.List[t.Any]:
+ items = [None] * len(parts)
+
+ for i, attribute_part in enumerate(parts):
+ item_i = item
+
+ for part in attribute_part:
+ item_i = environment.getitem(item_i, part)
+
+ if postprocess is not None:
+ item_i = postprocess(item_i)
+
+ items[i] = item_i
+
+ return items
+
+ return attrgetter
+
+
+def _prepare_attribute_parts(
+ attr: t.Optional[t.Union[str, int]],
+) -> t.List[t.Union[str, int]]:
+ if attr is None:
+ return []
+
+ if isinstance(attr, str):
+ return [int(x) if x.isdigit() else x for x in attr.split(".")]
+
+ return [attr]
+
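+# A small sketch of how the helpers above resolve dotted paths (the sample
+# data and the "address.city" path are assumptions for illustration):
+#
+#     from jinja2 import Environment
+#
+#     env = Environment()
+#     getter = make_attrgetter(env, "address.city", default="n/a")
+#     getter({"address": {"city": "Vienna"}})  # -> "Vienna"
+#     getter({"address": {}})                  # -> "n/a" (Undefined replaced)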
+
+def do_forceescape(value: "t.Union[str, HasHTML]") -> Markup:
+ """Enforce HTML escaping. This will probably double escape variables."""
+ if hasattr(value, "__html__"):
+ value = t.cast("HasHTML", value).__html__()
+
+ return escape(str(value))
+
+
+def do_urlencode(
+ value: t.Union[str, t.Mapping[str, t.Any], t.Iterable[t.Tuple[str, t.Any]]],
+) -> str:
+ """Quote data for use in a URL path or query using UTF-8.
+
+ Basic wrapper around :func:`urllib.parse.quote` when given a
+ string, or :func:`urllib.parse.urlencode` for a dict or iterable.
+
+ :param value: Data to quote. A string will be quoted directly. A
+ dict or iterable of ``(key, value)`` pairs will be joined as a
+ query string.
+
+ When given a string, "/" is not quoted. HTTP servers treat "/" and
+ "%2F" equivalently in paths. If you need quoted slashes, use the
+ ``|replace("/", "%2F")`` filter.
+
+ .. versionadded:: 2.7
+ """
+ if isinstance(value, str) or not isinstance(value, abc.Iterable):
+ return url_quote(value)
+
+ if isinstance(value, dict):
+ items: t.Iterable[t.Tuple[str, t.Any]] = value.items()
+ else:
+ items = value # type: ignore
+
+ return "&".join(
+ f"{url_quote(k, for_qs=True)}={url_quote(v, for_qs=True)}" for k, v in items
+ )
+
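+# Rough renderings of the filter above through an Environment (values chosen
+# for illustration; the exact percent-encoding follows urllib.parse.quote):
+#
+#     from jinja2 import Environment
+#
+#     env = Environment()
+#     env.from_string("{{ 'hello world/x'|urlencode }}").render()
+#     # -> 'hello%20world/x'
+#     env.from_string('{{ {"q": "a b"}|urlencode }}').render()
+#     # -> 'q=a%20b'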
+
+@pass_eval_context
+def do_replace(
+ eval_ctx: "EvalContext", s: str, old: str, new: str, count: t.Optional[int] = None
+) -> str:
+ """Return a copy of the value with all occurrences of a substring
+ replaced with a new one. The first argument is the substring
+ that should be replaced, the second is the replacement string.
+ If the optional third argument ``count`` is given, only the first
+ ``count`` occurrences are replaced:
+
+ .. sourcecode:: jinja
+
+ {{ "Hello World"|replace("Hello", "Goodbye") }}
+ -> Goodbye World
+
+ {{ "aaaaargh"|replace("a", "d'oh, ", 2) }}
+ -> d'oh, d'oh, aaargh
+ """
+ if count is None:
+ count = -1
+
+ if not eval_ctx.autoescape:
+ return str(s).replace(str(old), str(new), count)
+
+ if (
+ hasattr(old, "__html__")
+ or hasattr(new, "__html__")
+ and not hasattr(s, "__html__")
+ ):
+ s = escape(s)
+ else:
+ s = soft_str(s)
+
+ return s.replace(soft_str(old), soft_str(new), count)
+
+
+def do_upper(s: str) -> str:
+ """Convert a value to uppercase."""
+ return soft_str(s).upper()
+
+
+def do_lower(s: str) -> str:
+ """Convert a value to lowercase."""
+ return soft_str(s).lower()
+
+
+def do_items(value: t.Union[t.Mapping[K, V], Undefined]) -> t.Iterator[t.Tuple[K, V]]:
+ """Return an iterator over the ``(key, value)`` items of a mapping.
+
+ ``x|items`` is the same as ``x.items()``, except if ``x`` is
+ undefined an empty iterator is returned.
+
+ This filter is useful if you expect the template to be rendered with
+ an implementation of Jinja in another programming language that does
+ not have a ``.items()`` method on its mapping type.
+
+ .. code-block:: html+jinja
+
+        <dl>
+        {% for key, value in my_dict|items %}
+            <dt>{{ key }}</dt>
+            <dd>{{ value }}</dd>
+        {% endfor %}
+        </dl>
+
+ .. versionadded:: 3.1
+ """
+ if isinstance(value, Undefined):
+ return
+
+ if not isinstance(value, abc.Mapping):
+ raise TypeError("Can only get item pairs from a mapping.")
+
+ yield from value.items()
+
+
+# Check for characters that would move the parser state from key to value.
+# https://html.spec.whatwg.org/#attribute-name-state
+_attr_key_re = re.compile(r"[\s/>=]", flags=re.ASCII)
+
+
+@pass_eval_context
+def do_xmlattr(
+ eval_ctx: "EvalContext", d: t.Mapping[str, t.Any], autospace: bool = True
+) -> str:
+ """Create an SGML/XML attribute string based on the items in a dict.
+
+ **Values** that are neither ``none`` nor ``undefined`` are automatically
+ escaped, safely allowing untrusted user input.
+
+ User input should not be used as **keys** to this filter. If any key
+ contains a space, ``/`` solidus, ``>`` greater-than sign, or ``=`` equals
+ sign, this fails with a ``ValueError``. Regardless of this, user input
+ should never be used as keys to this filter, or must be separately validated
+ first.
+
+ .. sourcecode:: html+jinja
+
+     <ul{{ {'class': 'my_list', 'missing': none,
+             'id': 'list-%d'|format(variable)}|xmlattr }}>
+     ...
+     </ul>
+
+    Results in something like this:
+
+    .. sourcecode:: html
+
+     <ul class="my_list" id="list-42">
+     ...
+     </ul>
+
+ As you can see it automatically prepends a space in front of the item
+ if the filter returned something unless the second parameter is false.
+
+ .. versionchanged:: 3.1.4
+ Keys with ``/`` solidus, ``>`` greater-than sign, or ``=`` equals sign
+ are not allowed.
+
+ .. versionchanged:: 3.1.3
+ Keys with spaces are not allowed.
+ """
+ items = []
+
+ for key, value in d.items():
+ if value is None or isinstance(value, Undefined):
+ continue
+
+ if _attr_key_re.search(key) is not None:
+ raise ValueError(f"Invalid character in attribute name: {key!r}")
+
+ items.append(f'{escape(key)}="{escape(value)}"')
+
+ rv = " ".join(items)
+
+ if autospace and rv:
+ rv = " " + rv
+
+ if eval_ctx.autoescape:
+ rv = Markup(rv)
+
+ return rv
+
+
+def do_capitalize(s: str) -> str:
+ """Capitalize a value. The first character will be uppercase, all others
+ lowercase.
+ """
+ return soft_str(s).capitalize()
+
+
+_word_beginning_split_re = re.compile(r"([-\s({\[<]+)")
+
+
+def do_title(s: str) -> str:
+ """Return a titlecased version of the value. I.e. words will start with
+ uppercase letters, all remaining characters are lowercase.
+ """
+ return "".join(
+ [
+ item[0].upper() + item[1:].lower()
+ for item in _word_beginning_split_re.split(soft_str(s))
+ if item
+ ]
+ )
+
+
+def do_dictsort(
+ value: t.Mapping[K, V],
+ case_sensitive: bool = False,
+ by: 'te.Literal["key", "value"]' = "key",
+ reverse: bool = False,
+) -> t.List[t.Tuple[K, V]]:
+ """Sort a dict and yield (key, value) pairs. Python dicts may not
+ be in the order you want to display them in, so sort them first.
+
+ .. sourcecode:: jinja
+
+ {% for key, value in mydict|dictsort %}
+ sort the dict by key, case insensitive
+
+ {% for key, value in mydict|dictsort(reverse=true) %}
+ sort the dict by key, case insensitive, reverse order
+
+ {% for key, value in mydict|dictsort(true) %}
+ sort the dict by key, case sensitive
+
+ {% for key, value in mydict|dictsort(false, 'value') %}
+ sort the dict by value, case insensitive
+ """
+ if by == "key":
+ pos = 0
+ elif by == "value":
+ pos = 1
+ else:
+ raise FilterArgumentError('You can only sort by either "key" or "value"')
+
+ def sort_func(item: t.Tuple[t.Any, t.Any]) -> t.Any:
+ value = item[pos]
+
+ if not case_sensitive:
+ value = ignore_case(value)
+
+ return value
+
+ return sorted(value.items(), key=sort_func, reverse=reverse)
+
+
+@pass_environment
+def do_sort(
+ environment: "Environment",
+ value: "t.Iterable[V]",
+ reverse: bool = False,
+ case_sensitive: bool = False,
+ attribute: t.Optional[t.Union[str, int]] = None,
+) -> "t.List[V]":
+ """Sort an iterable using Python's :func:`sorted`.
+
+ .. sourcecode:: jinja
+
+ {% for city in cities|sort %}
+ ...
+ {% endfor %}
+
+ :param reverse: Sort descending instead of ascending.
+ :param case_sensitive: When sorting strings, sort upper and lower
+ case separately.
+ :param attribute: When sorting objects or dicts, an attribute or
+ key to sort by. Can use dot notation like ``"address.city"``.
+ Can be a list of attributes like ``"age,name"``.
+
+ The sort is stable, it does not change the relative order of
+    elements that compare equal. This makes it possible to chain
+    sorts on different attributes and orderings.
+
+ .. sourcecode:: jinja
+
+ {% for user in users|sort(attribute="name")
+ |sort(reverse=true, attribute="age") %}
+ ...
+ {% endfor %}
+
+ As a shortcut to chaining when the direction is the same for all
+    attributes, pass a comma-separated list of attributes.
+
+ .. sourcecode:: jinja
+
+ {% for user in users|sort(attribute="age,name") %}
+ ...
+ {% endfor %}
+
+ .. versionchanged:: 2.11.0
+ The ``attribute`` parameter can be a comma separated list of
+ attributes, e.g. ``"age,name"``.
+
+ .. versionchanged:: 2.6
+ The ``attribute`` parameter was added.
+ """
+ key_func = make_multi_attrgetter(
+ environment, attribute, postprocess=ignore_case if not case_sensitive else None
+ )
+ return sorted(value, key=key_func, reverse=reverse)
+
+
+@pass_environment
+def sync_do_unique(
+ environment: "Environment",
+ value: "t.Iterable[V]",
+ case_sensitive: bool = False,
+ attribute: t.Optional[t.Union[str, int]] = None,
+) -> "t.Iterator[V]":
+ """Returns a list of unique items from the given iterable.
+
+ .. sourcecode:: jinja
+
+ {{ ['foo', 'bar', 'foobar', 'FooBar']|unique|list }}
+ -> ['foo', 'bar', 'foobar']
+
+ The unique items are yielded in the same order as their first occurrence in
+ the iterable passed to the filter.
+
+ :param case_sensitive: Treat upper and lower case strings as distinct.
+ :param attribute: Filter objects with unique values for this attribute.
+ """
+ getter = make_attrgetter(
+ environment, attribute, postprocess=ignore_case if not case_sensitive else None
+ )
+ seen = set()
+
+ for item in value:
+ key = getter(item)
+
+ if key not in seen:
+ seen.add(key)
+ yield item
+
+
+@async_variant(sync_do_unique) # type: ignore
+async def do_unique(
+ environment: "Environment",
+ value: "t.Union[t.AsyncIterable[V], t.Iterable[V]]",
+ case_sensitive: bool = False,
+ attribute: t.Optional[t.Union[str, int]] = None,
+) -> "t.Iterator[V]":
+ return sync_do_unique(
+ environment, await auto_to_list(value), case_sensitive, attribute
+ )
+
+
+def _min_or_max(
+ environment: "Environment",
+ value: "t.Iterable[V]",
+ func: "t.Callable[..., V]",
+ case_sensitive: bool,
+ attribute: t.Optional[t.Union[str, int]],
+) -> "t.Union[V, Undefined]":
+ it = iter(value)
+
+ try:
+ first = next(it)
+ except StopIteration:
+ return environment.undefined("No aggregated item, sequence was empty.")
+
+ key_func = make_attrgetter(
+ environment, attribute, postprocess=ignore_case if not case_sensitive else None
+ )
+ return func(chain([first], it), key=key_func)
+
+
+@pass_environment
+def do_min(
+ environment: "Environment",
+ value: "t.Iterable[V]",
+ case_sensitive: bool = False,
+ attribute: t.Optional[t.Union[str, int]] = None,
+) -> "t.Union[V, Undefined]":
+ """Return the smallest item from the sequence.
+
+ .. sourcecode:: jinja
+
+ {{ [1, 2, 3]|min }}
+ -> 1
+
+ :param case_sensitive: Treat upper and lower case strings as distinct.
+ :param attribute: Get the object with the min value of this attribute.
+ """
+ return _min_or_max(environment, value, min, case_sensitive, attribute)
+
+
+@pass_environment
+def do_max(
+ environment: "Environment",
+ value: "t.Iterable[V]",
+ case_sensitive: bool = False,
+ attribute: t.Optional[t.Union[str, int]] = None,
+) -> "t.Union[V, Undefined]":
+ """Return the largest item from the sequence.
+
+ .. sourcecode:: jinja
+
+ {{ [1, 2, 3]|max }}
+ -> 3
+
+ :param case_sensitive: Treat upper and lower case strings as distinct.
+ :param attribute: Get the object with the max value of this attribute.
+ """
+ return _min_or_max(environment, value, max, case_sensitive, attribute)
+
+
+def do_default(
+ value: V,
+ default_value: V = "", # type: ignore
+ boolean: bool = False,
+) -> V:
+ """If the value is undefined it will return the passed default value,
+ otherwise the value of the variable:
+
+ .. sourcecode:: jinja
+
+ {{ my_variable|default('my_variable is not defined') }}
+
+ This will output the value of ``my_variable`` if the variable was
+ defined, otherwise ``'my_variable is not defined'``. If you want
+ to use default with variables that evaluate to false you have to
+ set the second parameter to `true`:
+
+ .. sourcecode:: jinja
+
+ {{ ''|default('the string was empty', true) }}
+
+ .. versionchanged:: 2.11
+ It's now possible to configure the :class:`~jinja2.Environment` with
+ :class:`~jinja2.ChainableUndefined` to make the `default` filter work
+ on nested elements and attributes that may contain undefined values
+ in the chain without getting an :exc:`~jinja2.UndefinedError`.
+ """
+ if isinstance(value, Undefined) or (boolean and not value):
+ return default_value
+
+ return value
+
+
+@pass_eval_context
+def sync_do_join(
+ eval_ctx: "EvalContext",
+ value: t.Iterable[t.Any],
+ d: str = "",
+ attribute: t.Optional[t.Union[str, int]] = None,
+) -> str:
+ """Return a string which is the concatenation of the strings in the
+    sequence. The separator between elements is an empty string by
+    default; you can define it with the optional parameter:
+
+ .. sourcecode:: jinja
+
+ {{ [1, 2, 3]|join('|') }}
+ -> 1|2|3
+
+ {{ [1, 2, 3]|join }}
+ -> 123
+
+ It is also possible to join certain attributes of an object:
+
+ .. sourcecode:: jinja
+
+ {{ users|join(', ', attribute='username') }}
+
+ .. versionadded:: 2.6
+ The `attribute` parameter was added.
+ """
+ if attribute is not None:
+ value = map(make_attrgetter(eval_ctx.environment, attribute), value)
+
+ # no automatic escaping? joining is a lot easier then
+ if not eval_ctx.autoescape:
+ return str(d).join(map(str, value))
+
+ # if the delimiter doesn't have an html representation we check
+ # if any of the items has. If yes we do a coercion to Markup
+ if not hasattr(d, "__html__"):
+ value = list(value)
+ do_escape = False
+
+ for idx, item in enumerate(value):
+ if hasattr(item, "__html__"):
+ do_escape = True
+ else:
+ value[idx] = str(item)
+
+ if do_escape:
+ d = escape(d)
+ else:
+ d = str(d)
+
+ return d.join(value)
+
+    # no html involved, do normal joining
+ return soft_str(d).join(map(soft_str, value))
+
+
+@async_variant(sync_do_join) # type: ignore
+async def do_join(
+ eval_ctx: "EvalContext",
+ value: t.Union[t.AsyncIterable[t.Any], t.Iterable[t.Any]],
+ d: str = "",
+ attribute: t.Optional[t.Union[str, int]] = None,
+) -> str:
+ return sync_do_join(eval_ctx, await auto_to_list(value), d, attribute)
+
+
+def do_center(value: str, width: int = 80) -> str:
+ """Centers the value in a field of a given width."""
+ return soft_str(value).center(width)
+
+
+@pass_environment
+def sync_do_first(
+ environment: "Environment", seq: "t.Iterable[V]"
+) -> "t.Union[V, Undefined]":
+ """Return the first item of a sequence."""
+ try:
+ return next(iter(seq))
+ except StopIteration:
+ return environment.undefined("No first item, sequence was empty.")
+
+
+@async_variant(sync_do_first) # type: ignore
+async def do_first(
+ environment: "Environment", seq: "t.Union[t.AsyncIterable[V], t.Iterable[V]]"
+) -> "t.Union[V, Undefined]":
+ try:
+ return await auto_aiter(seq).__anext__()
+ except StopAsyncIteration:
+ return environment.undefined("No first item, sequence was empty.")
+
+
+@pass_environment
+def do_last(
+ environment: "Environment", seq: "t.Reversible[V]"
+) -> "t.Union[V, Undefined]":
+ """Return the last item of a sequence.
+
+ Note: Does not work with generators. You may want to explicitly
+ convert it to a list:
+
+ .. sourcecode:: jinja
+
+ {{ data | selectattr('name', '==', 'Jinja') | list | last }}
+ """
+ try:
+ return next(iter(reversed(seq)))
+ except StopIteration:
+ return environment.undefined("No last item, sequence was empty.")
+
+
+# No async do_last, it may not be safe in async mode.
+
+
+@pass_context
+def do_random(context: "Context", seq: "t.Sequence[V]") -> "t.Union[V, Undefined]":
+ """Return a random item from the sequence."""
+ try:
+ return random.choice(seq)
+ except IndexError:
+ return context.environment.undefined("No random item, sequence was empty.")
+
+
+def do_filesizeformat(value: t.Union[str, float, int], binary: bool = False) -> str:
+ """Format the value like a 'human-readable' file size (i.e. 13 kB,
+    4.1 MB, 102 Bytes, etc). By default decimal prefixes are used (Mega,
+    Giga, etc.); if the second parameter is set to `True`, binary
+    prefixes are used (Mebi, Gibi).
+ """
+ bytes = float(value)
+ base = 1024 if binary else 1000
+ prefixes = [
+ ("KiB" if binary else "kB"),
+ ("MiB" if binary else "MB"),
+ ("GiB" if binary else "GB"),
+ ("TiB" if binary else "TB"),
+ ("PiB" if binary else "PB"),
+ ("EiB" if binary else "EB"),
+ ("ZiB" if binary else "ZB"),
+ ("YiB" if binary else "YB"),
+ ]
+
+ if bytes == 1:
+ return "1 Byte"
+ elif bytes < base:
+ return f"{int(bytes)} Bytes"
+ else:
+ for i, prefix in enumerate(prefixes):
+ unit = base ** (i + 2)
+
+ if bytes < unit:
+ return f"{base * bytes / unit:.1f} {prefix}"
+
+ return f"{base * bytes / unit:.1f} {prefix}"
+
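+# Illustrative renderings of the filter above (sample byte counts are
+# arbitrary; outputs shown are what the decimal/binary branches produce):
+#
+#     from jinja2 import Environment
+#
+#     env = Environment()
+#     env.from_string("{{ 1000000|filesizeformat }}").render()        # '1.0 MB'
+#     env.from_string("{{ 1048576|filesizeformat(true) }}").render()  # '1.0 MiB'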
+
+def do_pprint(value: t.Any) -> str:
+ """Pretty print a variable. Useful for debugging."""
+ return pformat(value)
+
+
+_uri_scheme_re = re.compile(r"^([\w.+-]{2,}:(/){0,2})$")
+
+
+@pass_eval_context
+def do_urlize(
+ eval_ctx: "EvalContext",
+ value: str,
+ trim_url_limit: t.Optional[int] = None,
+ nofollow: bool = False,
+ target: t.Optional[str] = None,
+ rel: t.Optional[str] = None,
+ extra_schemes: t.Optional[t.Iterable[str]] = None,
+) -> str:
+ """Convert URLs in text into clickable links.
+
+ This may not recognize links in some situations. Usually, a more
+ comprehensive formatter, such as a Markdown library, is a better
+ choice.
+
+ Works on ``http://``, ``https://``, ``www.``, ``mailto:``, and email
+ addresses. Links with trailing punctuation (periods, commas, closing
+ parentheses) and leading punctuation (opening parentheses) are
+ recognized excluding the punctuation. Email addresses that include
+ header fields are not recognized (for example,
+ ``mailto:address@example.com?cc=copy@example.com``).
+
+ :param value: Original text containing URLs to link.
+ :param trim_url_limit: Shorten displayed URL values to this length.
+ :param nofollow: Add the ``rel=nofollow`` attribute to links.
+ :param target: Add the ``target`` attribute to links.
+ :param rel: Add the ``rel`` attribute to links.
+ :param extra_schemes: Recognize URLs that start with these schemes
+ in addition to the default behavior. Defaults to
+ ``env.policies["urlize.extra_schemes"]``, which defaults to no
+ extra schemes.
+
+ .. versionchanged:: 3.0
+ The ``extra_schemes`` parameter was added.
+
+ .. versionchanged:: 3.0
+ Generate ``https://`` links for URLs without a scheme.
+
+ .. versionchanged:: 3.0
+ The parsing rules were updated. Recognize email addresses with
+ or without the ``mailto:`` scheme. Validate IP addresses. Ignore
+ parentheses and brackets in more cases.
+
+ .. versionchanged:: 2.8
+ The ``target`` parameter was added.
+ """
+ policies = eval_ctx.environment.policies
+ rel_parts = set((rel or "").split())
+
+ if nofollow:
+ rel_parts.add("nofollow")
+
+ rel_parts.update((policies["urlize.rel"] or "").split())
+ rel = " ".join(sorted(rel_parts)) or None
+
+ if target is None:
+ target = policies["urlize.target"]
+
+ if extra_schemes is None:
+ extra_schemes = policies["urlize.extra_schemes"] or ()
+
+ for scheme in extra_schemes:
+ if _uri_scheme_re.fullmatch(scheme) is None:
+ raise FilterArgumentError(f"{scheme!r} is not a valid URI scheme prefix.")
+
+ rv = urlize(
+ value,
+ trim_url_limit=trim_url_limit,
+ rel=rel,
+ target=target,
+ extra_schemes=extra_schemes,
+ )
+
+ if eval_ctx.autoescape:
+ rv = Markup(rv)
+
+ return rv
+
+
+def do_indent(
+ s: str, width: t.Union[int, str] = 4, first: bool = False, blank: bool = False
+) -> str:
+ """Return a copy of the string with each line indented by 4 spaces. The
+ first line and blank lines are not indented by default.
+
+ :param width: Number of spaces, or a string, to indent by.
+ :param first: Don't skip indenting the first line.
+ :param blank: Don't skip indenting empty lines.
+
+ .. versionchanged:: 3.0
+ ``width`` can be a string.
+
+ .. versionchanged:: 2.10
+ Blank lines are not indented by default.
+
+ Rename the ``indentfirst`` argument to ``first``.
+ """
+ if isinstance(width, str):
+ indention = width
+ else:
+ indention = " " * width
+
+ newline = "\n"
+
+ if isinstance(s, Markup):
+ indention = Markup(indention)
+ newline = Markup(newline)
+
+ s += newline # this quirk is necessary for splitlines method
+
+ if blank:
+ rv = (newline + indention).join(s.splitlines())
+ else:
+ lines = s.splitlines()
+ rv = lines.pop(0)
+
+ if lines:
+ rv += newline + newline.join(
+ indention + line if line else line for line in lines
+ )
+
+ if first:
+ rv = indention + rv
+
+ return rv
+
+
+@pass_environment
+def do_truncate(
+ env: "Environment",
+ s: str,
+ length: int = 255,
+ killwords: bool = False,
+ end: str = "...",
+ leeway: t.Optional[int] = None,
+) -> str:
+ """Return a truncated copy of the string. The length is specified
+ with the first parameter which defaults to ``255``. If the second
+ parameter is ``true`` the filter will cut the text at length. Otherwise
+ it will discard the last word. If the text was in fact
+ truncated it will append an ellipsis sign (``"..."``). If you want a
+ different ellipsis sign than ``"..."`` you can specify it using the
+ third parameter. Strings that only exceed the length by the tolerance
+ margin given in the fourth parameter will not be truncated.
+
+ .. sourcecode:: jinja
+
+ {{ "foo bar baz qux"|truncate(9) }}
+ -> "foo..."
+ {{ "foo bar baz qux"|truncate(9, True) }}
+ -> "foo ba..."
+ {{ "foo bar baz qux"|truncate(11) }}
+ -> "foo bar baz qux"
+ {{ "foo bar baz qux"|truncate(11, False, '...', 0) }}
+ -> "foo bar..."
+
+ The default leeway on newer Jinja versions is 5 and was 0 before but
+ can be reconfigured globally.
+ """
+ if leeway is None:
+ leeway = env.policies["truncate.leeway"]
+
+ assert length >= len(end), f"expected length >= {len(end)}, got {length}"
+ assert leeway >= 0, f"expected leeway >= 0, got {leeway}"
+
+ if len(s) <= length + leeway:
+ return s
+
+ if killwords:
+ return s[: length - len(end)] + end
+
+ result = s[: length - len(end)].rsplit(" ", 1)[0]
+ return result + end
+
+
+@pass_environment
+def do_wordwrap(
+ environment: "Environment",
+ s: str,
+ width: int = 79,
+ break_long_words: bool = True,
+ wrapstring: t.Optional[str] = None,
+ break_on_hyphens: bool = True,
+) -> str:
+ """Wrap a string to the given width. Existing newlines are treated
+ as paragraphs to be wrapped separately.
+
+ :param s: Original text to wrap.
+ :param width: Maximum length of wrapped lines.
+ :param break_long_words: If a word is longer than ``width``, break
+ it across lines.
+ :param break_on_hyphens: If a word contains hyphens, it may be split
+ across lines.
+ :param wrapstring: String to join each wrapped line. Defaults to
+ :attr:`Environment.newline_sequence`.
+
+ .. versionchanged:: 2.11
+ Existing newlines are treated as paragraphs wrapped separately.
+
+ .. versionchanged:: 2.11
+ Added the ``break_on_hyphens`` parameter.
+
+ .. versionchanged:: 2.7
+ Added the ``wrapstring`` parameter.
+ """
+ import textwrap
+
+ if wrapstring is None:
+ wrapstring = environment.newline_sequence
+
+ # textwrap.wrap doesn't consider existing newlines when wrapping.
+ # If the string has a newline before width, wrap will still insert
+ # a newline at width, resulting in a short line. Instead, split and
+ # wrap each paragraph individually.
+ return wrapstring.join(
+ [
+ wrapstring.join(
+ textwrap.wrap(
+ line,
+ width=width,
+ expand_tabs=False,
+ replace_whitespace=False,
+ break_long_words=break_long_words,
+ break_on_hyphens=break_on_hyphens,
+ )
+ )
+ for line in s.splitlines()
+ ]
+ )
+
+
+_word_re = re.compile(r"\w+")
+
+
+def do_wordcount(s: str) -> int:
+ """Count the words in that string."""
+ return len(_word_re.findall(soft_str(s)))
+
+
+def do_int(value: t.Any, default: int = 0, base: int = 10) -> int:
+ """Convert the value into an integer. If the
+ conversion doesn't work it will return ``0``. You can
+ override this default using the first parameter. You
+ can also override the default base (10) in the second
+ parameter, which handles input with prefixes such as
+ 0b, 0o and 0x for bases 2, 8 and 16 respectively.
+ The base is ignored for decimal numbers and non-string values.
+ """
+ try:
+ if isinstance(value, str):
+ return int(value, base)
+
+ return int(value)
+ except (TypeError, ValueError):
+ # this quirk is necessary so that "42.23"|int gives 42.
+ try:
+ return int(float(value))
+ except (TypeError, ValueError, OverflowError):
+ return default
+
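+# Quick sketch of the conversion rules above (inputs chosen for illustration):
+#
+#     from jinja2 import Environment
+#
+#     env = Environment()
+#     env.from_string("{{ '42.5'|int }}").render()           # -> '42'
+#     env.from_string("{{ '0x4d32'|int(0, 16) }}").render()  # -> '19762'
+#     env.from_string("{{ 'nope'|int(-1) }}").render()       # -> '-1'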
+
+def do_float(value: t.Any, default: float = 0.0) -> float:
+ """Convert the value into a floating point number. If the
+ conversion doesn't work it will return ``0.0``. You can
+ override this default using the first parameter.
+ """
+ try:
+ return float(value)
+ except (TypeError, ValueError):
+ return default
+
+
+def do_format(value: str, *args: t.Any, **kwargs: t.Any) -> str:
+ """Apply the given values to a `printf-style`_ format string, like
+ ``string % values``.
+
+ .. sourcecode:: jinja
+
+ {{ "%s, %s!"|format(greeting, name) }}
+ Hello, World!
+
+ In most cases it should be more convenient and efficient to use the
+ ``%`` operator or :meth:`str.format`.
+
+ .. code-block:: text
+
+ {{ "%s, %s!" % (greeting, name) }}
+ {{ "{}, {}!".format(greeting, name) }}
+
+ .. _printf-style: https://docs.python.org/library/stdtypes.html
+ #printf-style-string-formatting
+ """
+ if args and kwargs:
+ raise FilterArgumentError(
+ "can't handle positional and keyword arguments at the same time"
+ )
+
+ return soft_str(value) % (kwargs or args)
+
+
+def do_trim(value: str, chars: t.Optional[str] = None) -> str:
+ """Strip leading and trailing characters, by default whitespace."""
+ return soft_str(value).strip(chars)
+
+
+def do_striptags(value: "t.Union[str, HasHTML]") -> str:
+ """Strip SGML/XML tags and replace adjacent whitespace by one space."""
+ if hasattr(value, "__html__"):
+ value = t.cast("HasHTML", value).__html__()
+
+ return Markup(str(value)).striptags()
+
+
+def sync_do_slice(
+ value: "t.Collection[V]", slices: int, fill_with: "t.Optional[V]" = None
+) -> "t.Iterator[t.List[V]]":
+ """Slice an iterator and return a list of lists containing
+ those items. Useful if you want to create a div containing
+ three ul tags that represent columns:
+
+ .. sourcecode:: html+jinja
+
+        <div class="columnwrapper">
+        {%- for column in items|slice(3) %}
+          <ul class="column-{{ loop.index }}">
+          {%- for item in column %}
+            <li>{{ item }}</li>
+          {%- endfor %}
+          </ul>
+        {%- endfor %}
+        </div>
+
+ If you pass it a second argument it's used to fill missing
+ values on the last iteration.
+ """
+ seq = list(value)
+ length = len(seq)
+ items_per_slice = length // slices
+ slices_with_extra = length % slices
+ offset = 0
+
+ for slice_number in range(slices):
+ start = offset + slice_number * items_per_slice
+
+ if slice_number < slices_with_extra:
+ offset += 1
+
+ end = offset + (slice_number + 1) * items_per_slice
+ tmp = seq[start:end]
+
+ if fill_with is not None and slice_number >= slices_with_extra:
+ tmp.append(fill_with)
+
+ yield tmp
+
+
+@async_variant(sync_do_slice) # type: ignore
+async def do_slice(
+ value: "t.Union[t.AsyncIterable[V], t.Iterable[V]]",
+ slices: int,
+ fill_with: t.Optional[t.Any] = None,
+) -> "t.Iterator[t.List[V]]":
+ return sync_do_slice(await auto_to_list(value), slices, fill_with)
+
+
+def do_batch(
+ value: "t.Iterable[V]", linecount: int, fill_with: "t.Optional[V]" = None
+) -> "t.Iterator[t.List[V]]":
+ """
+ A filter that batches items. It works pretty much like `slice`
+ just the other way round. It returns a list of lists with the
+ given number of items. If you provide a second parameter this
+ is used to fill up missing items. See this example:
+
+ .. sourcecode:: html+jinja
+
+        <table>
+        {%- for row in items|batch(3, '&nbsp;') %}
+          <tr>
+          {%- for column in row %}
+            <td>{{ column }}</td>
+          {%- endfor %}
+          </tr>
+        {%- endfor %}
+        </table>
+ """
+ tmp: t.List[V] = []
+
+ for item in value:
+ if len(tmp) == linecount:
+ yield tmp
+ tmp = []
+
+ tmp.append(item)
+
+ if tmp:
+ if fill_with is not None and len(tmp) < linecount:
+ tmp += [fill_with] * (linecount - len(tmp))
+
+ yield tmp
+
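+# A small sketch comparing the filter above with |slice (the list literal is
+# an arbitrary example):
+#
+#     from jinja2 import Environment
+#
+#     env = Environment()
+#     env.from_string("{{ [1, 2, 3, 4, 5]|batch(2, 0)|list }}").render()
+#     # -> '[[1, 2], [3, 4], [5, 0]]'
+#     env.from_string("{{ [1, 2, 3, 4, 5]|slice(2, 0)|list }}").render()
+#     # -> '[[1, 2, 3], [4, 5, 0]]'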
+
+def do_round(
+ value: float,
+ precision: int = 0,
+ method: 'te.Literal["common", "ceil", "floor"]' = "common",
+) -> float:
+ """Round the number to a given precision. The first
+ parameter specifies the precision (default is ``0``), the
+ second the rounding method:
+
+ - ``'common'`` rounds either up or down
+ - ``'ceil'`` always rounds up
+ - ``'floor'`` always rounds down
+
+ If you don't specify a method ``'common'`` is used.
+
+ .. sourcecode:: jinja
+
+ {{ 42.55|round }}
+ -> 43.0
+ {{ 42.55|round(1, 'floor') }}
+ -> 42.5
+
+ Note that even if rounded to 0 precision, a float is returned. If
+ you need a real integer, pipe it through `int`:
+
+ .. sourcecode:: jinja
+
+ {{ 42.55|round|int }}
+ -> 43
+ """
+ if method not in {"common", "ceil", "floor"}:
+ raise FilterArgumentError("method must be common, ceil or floor")
+
+ if method == "common":
+ return round(value, precision)
+
+ func = getattr(math, method)
+ return t.cast(float, func(value * (10**precision)) / (10**precision))
+
+
+class _GroupTuple(t.NamedTuple):
+ grouper: t.Any
+ list: t.List[t.Any]
+
+ # Use the regular tuple repr to hide this subclass if users print
+ # out the value during debugging.
+ def __repr__(self) -> str:
+ return tuple.__repr__(self)
+
+ def __str__(self) -> str:
+ return tuple.__str__(self)
+
+
+@pass_environment
+def sync_do_groupby(
+ environment: "Environment",
+ value: "t.Iterable[V]",
+ attribute: t.Union[str, int],
+ default: t.Optional[t.Any] = None,
+ case_sensitive: bool = False,
+) -> "t.List[_GroupTuple]":
+ """Group a sequence of objects by an attribute using Python's
+ :func:`itertools.groupby`. The attribute can use dot notation for
+ nested access, like ``"address.city"``. Unlike Python's ``groupby``,
+ the values are sorted first so only one group is returned for each
+ unique value.
+
+ For example, a list of ``User`` objects with a ``city`` attribute
+ can be rendered in groups. In this example, ``grouper`` refers to
+ the ``city`` value of the group.
+
+ .. sourcecode:: html+jinja
+
+        <ul>{% for city, items in users|groupby("city") %}
+          <li>{{ city }}
+            <ul>{% for user in items %}
+              <li>{{ user.name }}
+            {% endfor %}</ul>
+          </li>
+        {% endfor %}</ul>
+
+ ``groupby`` yields namedtuples of ``(grouper, list)``, which
+ can be used instead of the tuple unpacking above. ``grouper`` is the
+ value of the attribute, and ``list`` is the items with that value.
+
+ .. sourcecode:: html+jinja
+
+        <ul>{% for group in users|groupby("city") %}
+          <li>{{ group.grouper }}: {{ group.list|join(", ") }}
+        {% endfor %}</ul>
+
+ You can specify a ``default`` value to use if an object in the list
+ does not have the given attribute.
+
+ .. sourcecode:: jinja
+
+ {% for city, items in users|groupby("city", default="NY") %}
+ - {{ city }}: {{ items|map(attribute="name")|join(", ") }}
+ {% endfor %}
+
+ Like the :func:`~jinja-filters.sort` filter, sorting and grouping is
+ case-insensitive by default. The ``key`` for each group will have
+ the case of the first item in that group of values. For example, if
+ a list of users has cities ``["CA", "NY", "ca"]``, the "CA" group
+ will have two values. This can be disabled by passing
+ ``case_sensitive=True``.
+
+ .. versionchanged:: 3.1
+ Added the ``case_sensitive`` parameter. Sorting and grouping is
+ case-insensitive by default, matching other filters that do
+ comparisons.
+
+ .. versionchanged:: 3.0
+ Added the ``default`` parameter.
+
+ .. versionchanged:: 2.6
+ The attribute supports dot notation for nested access.
+ """
+ expr = make_attrgetter(
+ environment,
+ attribute,
+ postprocess=ignore_case if not case_sensitive else None,
+ default=default,
+ )
+ out = [
+ _GroupTuple(key, list(values))
+ for key, values in groupby(sorted(value, key=expr), expr)
+ ]
+
+ if not case_sensitive:
+ # Return the real key from the first value instead of the lowercase key.
+ output_expr = make_attrgetter(environment, attribute, default=default)
+ out = [_GroupTuple(output_expr(values[0]), values) for _, values in out]
+
+ return out
+
+
+@async_variant(sync_do_groupby) # type: ignore
+async def do_groupby(
+ environment: "Environment",
+ value: "t.Union[t.AsyncIterable[V], t.Iterable[V]]",
+ attribute: t.Union[str, int],
+ default: t.Optional[t.Any] = None,
+ case_sensitive: bool = False,
+) -> "t.List[_GroupTuple]":
+ expr = make_attrgetter(
+ environment,
+ attribute,
+ postprocess=ignore_case if not case_sensitive else None,
+ default=default,
+ )
+ out = [
+ _GroupTuple(key, await auto_to_list(values))
+ for key, values in groupby(sorted(await auto_to_list(value), key=expr), expr)
+ ]
+
+ if not case_sensitive:
+ # Return the real key from the first value instead of the lowercase key.
+ output_expr = make_attrgetter(environment, attribute, default=default)
+ out = [_GroupTuple(output_expr(values[0]), values) for _, values in out]
+
+ return out
+
+
+@pass_environment
+def sync_do_sum(
+ environment: "Environment",
+ iterable: "t.Iterable[V]",
+ attribute: t.Optional[t.Union[str, int]] = None,
+ start: V = 0, # type: ignore
+) -> V:
+ """Returns the sum of a sequence of numbers plus the value of parameter
+ 'start' (which defaults to 0). When the sequence is empty it returns
+ start.
+
+ It is also possible to sum up only certain attributes:
+
+ .. sourcecode:: jinja
+
+ Total: {{ items|sum(attribute='price') }}
+
+ .. versionchanged:: 2.6
+ The ``attribute`` parameter was added to allow summing up over
+ attributes. Also the ``start`` parameter was moved on to the right.
+ """
+ if attribute is not None:
+ iterable = map(make_attrgetter(environment, attribute), iterable)
+
+ return sum(iterable, start) # type: ignore[no-any-return, call-overload]
+
+
+@async_variant(sync_do_sum) # type: ignore
+async def do_sum(
+ environment: "Environment",
+ iterable: "t.Union[t.AsyncIterable[V], t.Iterable[V]]",
+ attribute: t.Optional[t.Union[str, int]] = None,
+ start: V = 0, # type: ignore
+) -> V:
+ rv = start
+
+ if attribute is not None:
+ func = make_attrgetter(environment, attribute)
+ else:
+
+ def func(x: V) -> V:
+ return x
+
+ async for item in auto_aiter(iterable):
+ rv += func(item)
+
+ return rv
+
+
+def sync_do_list(value: "t.Iterable[V]") -> "t.List[V]":
+ """Convert the value into a list. If it was a string the returned list
+ will be a list of characters.
+ """
+ return list(value)
+
+
+@async_variant(sync_do_list) # type: ignore
+async def do_list(value: "t.Union[t.AsyncIterable[V], t.Iterable[V]]") -> "t.List[V]":
+ return await auto_to_list(value)
+
+
+def do_mark_safe(value: str) -> Markup:
+ """Mark the value as safe which means that in an environment with automatic
+ escaping enabled this variable will not be escaped.
+ """
+ return Markup(value)
+
+
+def do_mark_unsafe(value: str) -> str:
+ """Mark a value as unsafe. This is the reverse operation for :func:`safe`."""
+ return str(value)
+
+
+@typing.overload
+def do_reverse(value: str) -> str: ...
+
+
+@typing.overload
+def do_reverse(value: "t.Iterable[V]") -> "t.Iterable[V]": ...
+
+
+def do_reverse(value: t.Union[str, t.Iterable[V]]) -> t.Union[str, t.Iterable[V]]:
+ """Reverse the object or return an iterator that iterates over it the other
+ way round.
+ """
+ if isinstance(value, str):
+ return value[::-1]
+
+ try:
+ return reversed(value) # type: ignore
+ except TypeError:
+ try:
+ rv = list(value)
+ rv.reverse()
+ return rv
+ except TypeError as e:
+ raise FilterArgumentError("argument must be iterable") from e
+
+
+@pass_environment
+def do_attr(
+ environment: "Environment", obj: t.Any, name: str
+) -> t.Union[Undefined, t.Any]:
+ """Get an attribute of an object. ``foo|attr("bar")`` works like
+ ``foo.bar``, but returns undefined instead of falling back to ``foo["bar"]``
+ if the attribute doesn't exist.
+
+    See :ref:`Notes on subscriptions <notes-on-subscriptions>` for more details.
+ """
+ # Environment.getattr will fall back to obj[name] if obj.name doesn't exist.
+ # But we want to call env.getattr to get behavior such as sandboxing.
+ # Determine if the attr exists first, so we know the fallback won't trigger.
+ try:
+ # This avoids executing properties/descriptors, but misses __getattr__
+ # and __getattribute__ dynamic attrs.
+ getattr_static(obj, name)
+ except AttributeError:
+ # This finds dynamic attrs, and we know it's not a descriptor at this point.
+ if not hasattr(obj, name):
+ return environment.undefined(obj=obj, name=name)
+
+ return environment.getattr(obj, name)
+
+
+@typing.overload
+def sync_do_map(
+ context: "Context",
+ value: t.Iterable[t.Any],
+ name: str,
+ *args: t.Any,
+ **kwargs: t.Any,
+) -> t.Iterable[t.Any]: ...
+
+
+@typing.overload
+def sync_do_map(
+ context: "Context",
+ value: t.Iterable[t.Any],
+ *,
+ attribute: str = ...,
+ default: t.Optional[t.Any] = None,
+) -> t.Iterable[t.Any]: ...
+
+
+@pass_context
+def sync_do_map(
+ context: "Context", value: t.Iterable[t.Any], *args: t.Any, **kwargs: t.Any
+) -> t.Iterable[t.Any]:
+ """Applies a filter on a sequence of objects or looks up an attribute.
+    This is useful when dealing with lists of objects where you are really
+    only interested in a certain value of each of them.
+
+ The basic usage is mapping on an attribute. Imagine you have a list
+ of users but you are only interested in a list of usernames:
+
+ .. sourcecode:: jinja
+
+ Users on this page: {{ users|map(attribute='username')|join(', ') }}
+
+ You can specify a ``default`` value to use if an object in the list
+ does not have the given attribute.
+
+ .. sourcecode:: jinja
+
+ {{ users|map(attribute="username", default="Anonymous")|join(", ") }}
+
+ Alternatively you can let it invoke a filter by passing the name of the
+ filter and the arguments afterwards. A good example would be applying a
+ text conversion filter on a sequence:
+
+ .. sourcecode:: jinja
+
+ Users on this page: {{ titles|map('lower')|join(', ') }}
+
+ Similar to a generator comprehension such as:
+
+ .. code-block:: python
+
+ (u.username for u in users)
+ (getattr(u, "username", "Anonymous") for u in users)
+ (do_lower(x) for x in titles)
+
+ .. versionchanged:: 2.11.0
+ Added the ``default`` parameter.
+
+ .. versionadded:: 2.7
+ """
+ if value:
+ func = prepare_map(context, args, kwargs)
+
+ for item in value:
+ yield func(item)
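+
+# Illustrative usage sketch (not part of the module itself); the user dicts
+# and the built-in ``upper`` filter are only there to show both call styles:
+#
+#   from jinja2 import Environment
+#   env = Environment()
+#   users = [{"username": "ann"}, {"username": "bo"}]
+#   env.from_string("{{ users|map(attribute='username')|join(', ') }}").render(users=users)
+#   # -> "ann, bo"
+#   env.from_string("{{ words|map('upper')|join(' ') }}").render(words=["a", "b"])
+#   # -> "A B"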
+
+
+@typing.overload
+def do_map(
+ context: "Context",
+ value: t.Union[t.AsyncIterable[t.Any], t.Iterable[t.Any]],
+ name: str,
+ *args: t.Any,
+ **kwargs: t.Any,
+) -> t.Iterable[t.Any]: ...
+
+
+@typing.overload
+def do_map(
+ context: "Context",
+ value: t.Union[t.AsyncIterable[t.Any], t.Iterable[t.Any]],
+ *,
+ attribute: str = ...,
+ default: t.Optional[t.Any] = None,
+) -> t.Iterable[t.Any]: ...
+
+
+@async_variant(sync_do_map) # type: ignore
+async def do_map(
+ context: "Context",
+ value: t.Union[t.AsyncIterable[t.Any], t.Iterable[t.Any]],
+ *args: t.Any,
+ **kwargs: t.Any,
+) -> t.AsyncIterable[t.Any]:
+ if value:
+ func = prepare_map(context, args, kwargs)
+
+ async for item in auto_aiter(value):
+ yield await auto_await(func(item))
+
+
+@pass_context
+def sync_do_select(
+ context: "Context", value: "t.Iterable[V]", *args: t.Any, **kwargs: t.Any
+) -> "t.Iterator[V]":
+ """Filters a sequence of objects by applying a test to each object,
+    and selecting only those objects for which the test succeeds.
+
+ If no test is specified, each object will be evaluated as a boolean.
+
+ Example usage:
+
+ .. sourcecode:: jinja
+
+ {{ numbers|select("odd") }}
+ {{ numbers|select("odd") }}
+ {{ numbers|select("divisibleby", 3) }}
+ {{ numbers|select("lessthan", 42) }}
+ {{ strings|select("equalto", "mystring") }}
+
+ Similar to a generator comprehension such as:
+
+ .. code-block:: python
+
+ (n for n in numbers if test_odd(n))
+ (n for n in numbers if test_divisibleby(n, 3))
+
+ .. versionadded:: 2.7
+ """
+ return select_or_reject(context, value, args, kwargs, lambda x: x, False)
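+
+# Illustrative usage sketch (not part of the module itself); ``odd`` is a
+# built-in test and ``numbers`` is a made-up variable:
+#
+#   from jinja2 import Environment
+#   env = Environment()
+#   env.from_string("{{ numbers|select('odd')|join(', ') }}").render(numbers=range(6))
+#   # -> "1, 3, 5"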
+
+
+@async_variant(sync_do_select) # type: ignore
+async def do_select(
+ context: "Context",
+ value: "t.Union[t.AsyncIterable[V], t.Iterable[V]]",
+ *args: t.Any,
+ **kwargs: t.Any,
+) -> "t.AsyncIterator[V]":
+ return async_select_or_reject(context, value, args, kwargs, lambda x: x, False)
+
+
+@pass_context
+def sync_do_reject(
+ context: "Context", value: "t.Iterable[V]", *args: t.Any, **kwargs: t.Any
+) -> "t.Iterator[V]":
+ """Filters a sequence of objects by applying a test to each object,
+    and rejecting those objects for which the test succeeds.
+
+ If no test is specified, each object will be evaluated as a boolean.
+
+ Example usage:
+
+ .. sourcecode:: jinja
+
+ {{ numbers|reject("odd") }}
+
+ Similar to a generator comprehension such as:
+
+ .. code-block:: python
+
+ (n for n in numbers if not test_odd(n))
+
+ .. versionadded:: 2.7
+ """
+ return select_or_reject(context, value, args, kwargs, lambda x: not x, False)
+
+
+@async_variant(sync_do_reject) # type: ignore
+async def do_reject(
+ context: "Context",
+ value: "t.Union[t.AsyncIterable[V], t.Iterable[V]]",
+ *args: t.Any,
+ **kwargs: t.Any,
+) -> "t.AsyncIterator[V]":
+ return async_select_or_reject(context, value, args, kwargs, lambda x: not x, False)
+
+
+@pass_context
+def sync_do_selectattr(
+ context: "Context", value: "t.Iterable[V]", *args: t.Any, **kwargs: t.Any
+) -> "t.Iterator[V]":
+ """Filters a sequence of objects by applying a test to the specified
+    attribute of each object, and selecting only those objects for which
+    the test succeeds.
+
+ If no test is specified, the attribute's value will be evaluated as
+ a boolean.
+
+ Example usage:
+
+ .. sourcecode:: jinja
+
+ {{ users|selectattr("is_active") }}
+ {{ users|selectattr("email", "none") }}
+
+ Similar to a generator comprehension such as:
+
+ .. code-block:: python
+
+ (user for user in users if user.is_active)
+ (user for user in users if test_none(user.email))
+
+ .. versionadded:: 2.7
+ """
+ return select_or_reject(context, value, args, kwargs, lambda x: x, True)
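+
+# Illustrative usage sketch (not part of the module itself); the user dicts
+# are made up, and attribute lookup falls back to item lookup as usual:
+#
+#   from jinja2 import Environment
+#   env = Environment()
+#   users = [{"name": "ann", "active": True}, {"name": "bo", "active": False}]
+#   env.from_string(
+#       "{{ users|selectattr('active')|map(attribute='name')|join(', ') }}"
+#   ).render(users=users)
+#   # -> "ann"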
+
+
+@async_variant(sync_do_selectattr) # type: ignore
+async def do_selectattr(
+ context: "Context",
+ value: "t.Union[t.AsyncIterable[V], t.Iterable[V]]",
+ *args: t.Any,
+ **kwargs: t.Any,
+) -> "t.AsyncIterator[V]":
+ return async_select_or_reject(context, value, args, kwargs, lambda x: x, True)
+
+
+@pass_context
+def sync_do_rejectattr(
+ context: "Context", value: "t.Iterable[V]", *args: t.Any, **kwargs: t.Any
+) -> "t.Iterator[V]":
+ """Filters a sequence of objects by applying a test to the specified
+    attribute of each object, and rejecting those objects for which the
+    test succeeds.
+
+ If no test is specified, the attribute's value will be evaluated as
+ a boolean.
+
+ .. sourcecode:: jinja
+
+ {{ users|rejectattr("is_active") }}
+ {{ users|rejectattr("email", "none") }}
+
+ Similar to a generator comprehension such as:
+
+ .. code-block:: python
+
+ (user for user in users if not user.is_active)
+ (user for user in users if not test_none(user.email))
+
+ .. versionadded:: 2.7
+ """
+ return select_or_reject(context, value, args, kwargs, lambda x: not x, True)
+
+
+@async_variant(sync_do_rejectattr) # type: ignore
+async def do_rejectattr(
+ context: "Context",
+ value: "t.Union[t.AsyncIterable[V], t.Iterable[V]]",
+ *args: t.Any,
+ **kwargs: t.Any,
+) -> "t.AsyncIterator[V]":
+ return async_select_or_reject(context, value, args, kwargs, lambda x: not x, True)
+
+
+@pass_eval_context
+def do_tojson(
+ eval_ctx: "EvalContext", value: t.Any, indent: t.Optional[int] = None
+) -> Markup:
+ """Serialize an object to a string of JSON, and mark it safe to
+ render in HTML. This filter is only for use in HTML documents.
+
+ The returned string is safe to render in HTML documents and
+ ``