update windows build to Python 3.7

parent 73105fa71e
commit ddc59ab92d

5761 changed files with 750298 additions and 213405 deletions
.gitignore (vendored): 8 deletions
@@ -1,8 +0,0 @@
-*.swp
-*.pyc
-*.pyo
-__pycache__
-pip_cache
-.DS_Store
-build
-dist
COPYING: 339 deletions
@@ -1,339 +0,0 @@
-GNU GENERAL PUBLIC LICENSE
-Version 2, June 1991
-
-Copyright (C) 1989, 1991 Free Software Foundation, Inc.,
-51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
-Everyone is permitted to copy and distribute verbatim copies
-of this license document, but changing it is not allowed.
-
-Preamble
-
-The licenses for most software are designed to take away your
-freedom to share and change it. By contrast, the GNU General Public
-License is intended to guarantee your freedom to share and change free
-software--to make sure the software is free for all its users. This
-General Public License applies to most of the Free Software
-Foundation's software and to any other program whose authors commit to
-using it. (Some other Free Software Foundation software is covered by
-the GNU Lesser General Public License instead.) You can apply it to
-your programs, too.
-
-When we speak of free software, we are referring to freedom, not
-price. Our General Public Licenses are designed to make sure that you
-have the freedom to distribute copies of free software (and charge for
-this service if you wish), that you receive source code or can get it
-if you want it, that you can change the software or use pieces of it
-in new free programs; and that you know you can do these things.
-
-To protect your rights, we need to make restrictions that forbid
-anyone to deny you these rights or to ask you to surrender the rights.
-These restrictions translate to certain responsibilities for you if you
-distribute copies of the software, or if you modify it.
-
-For example, if you distribute copies of such a program, whether
-gratis or for a fee, you must give the recipients all the rights that
-you have. You must make sure that they, too, receive or can get the
-source code. And you must show them these terms so they know their
-rights.
-
-We protect your rights with two steps: (1) copyright the software, and
-(2) offer you this license which gives you legal permission to copy,
-distribute and/or modify the software.
-
-Also, for each author's protection and ours, we want to make certain
-that everyone understands that there is no warranty for this free
-software. If the software is modified by someone else and passed on, we
-want its recipients to know that what they have is not the original, so
-that any problems introduced by others will not reflect on the original
-authors' reputations.
-
-Finally, any free program is threatened constantly by software
-patents. We wish to avoid the danger that redistributors of a free
-program will individually obtain patent licenses, in effect making the
-program proprietary. To prevent this, we have made it clear that any
-patent must be licensed for everyone's free use or not licensed at all.
-
-The precise terms and conditions for copying, distribution and
-modification follow.
-
-GNU GENERAL PUBLIC LICENSE
-TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
-
-0. This License applies to any program or other work which contains
-a notice placed by the copyright holder saying it may be distributed
-under the terms of this General Public License. The "Program", below,
-refers to any such program or work, and a "work based on the Program"
-means either the Program or any derivative work under copyright law:
-that is to say, a work containing the Program or a portion of it,
-either verbatim or with modifications and/or translated into another
-language. (Hereinafter, translation is included without limitation in
-the term "modification".) Each licensee is addressed as "you".
-
-Activities other than copying, distribution and modification are not
-covered by this License; they are outside its scope. The act of
-running the Program is not restricted, and the output from the Program
-is covered only if its contents constitute a work based on the
-Program (independent of having been made by running the Program).
-Whether that is true depends on what the Program does.
-
-1. You may copy and distribute verbatim copies of the Program's
-source code as you receive it, in any medium, provided that you
-conspicuously and appropriately publish on each copy an appropriate
-copyright notice and disclaimer of warranty; keep intact all the
-notices that refer to this License and to the absence of any warranty;
-and give any other recipients of the Program a copy of this License
-along with the Program.
-
-You may charge a fee for the physical act of transferring a copy, and
-you may at your option offer warranty protection in exchange for a fee.
-
-2. You may modify your copy or copies of the Program or any portion
-of it, thus forming a work based on the Program, and copy and
-distribute such modifications or work under the terms of Section 1
-above, provided that you also meet all of these conditions:
-
-a) You must cause the modified files to carry prominent notices
-stating that you changed the files and the date of any change.
-
-b) You must cause any work that you distribute or publish, that in
-whole or in part contains or is derived from the Program or any
-part thereof, to be licensed as a whole at no charge to all third
-parties under the terms of this License.
-
-c) If the modified program normally reads commands interactively
-when run, you must cause it, when started running for such
-interactive use in the most ordinary way, to print or display an
-announcement including an appropriate copyright notice and a
-notice that there is no warranty (or else, saying that you provide
-a warranty) and that users may redistribute the program under
-these conditions, and telling the user how to view a copy of this
-License. (Exception: if the Program itself is interactive but
-does not normally print such an announcement, your work based on
-the Program is not required to print an announcement.)
-
-These requirements apply to the modified work as a whole. If
-identifiable sections of that work are not derived from the Program,
-and can be reasonably considered independent and separate works in
-themselves, then this License, and its terms, do not apply to those
-sections when you distribute them as separate works. But when you
-distribute the same sections as part of a whole which is a work based
-on the Program, the distribution of the whole must be on the terms of
-this License, whose permissions for other licensees extend to the
-entire whole, and thus to each and every part regardless of who wrote it.
-
-Thus, it is not the intent of this section to claim rights or contest
-your rights to work written entirely by you; rather, the intent is to
-exercise the right to control the distribution of derivative or
-collective works based on the Program.
-
-In addition, mere aggregation of another work not based on the Program
-with the Program (or with a work based on the Program) on a volume of
-a storage or distribution medium does not bring the other work under
-the scope of this License.
-
-3. You may copy and distribute the Program (or a work based on it,
-under Section 2) in object code or executable form under the terms of
-Sections 1 and 2 above provided that you also do one of the following:
-
-a) Accompany it with the complete corresponding machine-readable
-source code, which must be distributed under the terms of Sections
-1 and 2 above on a medium customarily used for software interchange; or,
-
-b) Accompany it with a written offer, valid for at least three
-years, to give any third party, for a charge no more than your
-cost of physically performing source distribution, a complete
-machine-readable copy of the corresponding source code, to be
-distributed under the terms of Sections 1 and 2 above on a medium
-customarily used for software interchange; or,
-
-c) Accompany it with the information you received as to the offer
-to distribute corresponding source code. (This alternative is
-allowed only for noncommercial distribution and only if you
-received the program in object code or executable form with such
-an offer, in accord with Subsection b above.)
-
-The source code for a work means the preferred form of the work for
-making modifications to it. For an executable work, complete source
-code means all the source code for all modules it contains, plus any
-associated interface definition files, plus the scripts used to
-control compilation and installation of the executable. However, as a
-special exception, the source code distributed need not include
-anything that is normally distributed (in either source or binary
-form) with the major components (compiler, kernel, and so on) of the
-operating system on which the executable runs, unless that component
-itself accompanies the executable.
-
-If distribution of executable or object code is made by offering
-access to copy from a designated place, then offering equivalent
-access to copy the source code from the same place counts as
-distribution of the source code, even though third parties are not
-compelled to copy the source along with the object code.
-
-4. You may not copy, modify, sublicense, or distribute the Program
-except as expressly provided under this License. Any attempt
-otherwise to copy, modify, sublicense or distribute the Program is
-void, and will automatically terminate your rights under this License.
-However, parties who have received copies, or rights, from you under
-this License will not have their licenses terminated so long as such
-parties remain in full compliance.
-
-5. You are not required to accept this License, since you have not
-signed it. However, nothing else grants you permission to modify or
-distribute the Program or its derivative works. These actions are
-prohibited by law if you do not accept this License. Therefore, by
-modifying or distributing the Program (or any work based on the
-Program), you indicate your acceptance of this License to do so, and
-all its terms and conditions for copying, distributing or modifying
-the Program or works based on it.
-
-6. Each time you redistribute the Program (or any work based on the
-Program), the recipient automatically receives a license from the
-original licensor to copy, distribute or modify the Program subject to
-these terms and conditions. You may not impose any further
-restrictions on the recipients' exercise of the rights granted herein.
-You are not responsible for enforcing compliance by third parties to
-this License.
-
-7. If, as a consequence of a court judgment or allegation of patent
-infringement or for any other reason (not limited to patent issues),
-conditions are imposed on you (whether by court order, agreement or
-otherwise) that contradict the conditions of this License, they do not
-excuse you from the conditions of this License. If you cannot
-distribute so as to satisfy simultaneously your obligations under this
-License and any other pertinent obligations, then as a consequence you
-may not distribute the Program at all. For example, if a patent
-license would not permit royalty-free redistribution of the Program by
-all those who receive copies directly or indirectly through you, then
-the only way you could satisfy both it and this License would be to
-refrain entirely from distribution of the Program.
-
-If any portion of this section is held invalid or unenforceable under
-any particular circumstance, the balance of the section is intended to
-apply and the section as a whole is intended to apply in other
-circumstances.
-
-It is not the purpose of this section to induce you to infringe any
-patents or other property right claims or to contest validity of any
-such claims; this section has the sole purpose of protecting the
-integrity of the free software distribution system, which is
-implemented by public license practices. Many people have made
-generous contributions to the wide range of software distributed
-through that system in reliance on consistent application of that
-system; it is up to the author/donor to decide if he or she is willing
-to distribute software through any other system and a licensee cannot
-impose that choice.
-
-This section is intended to make thoroughly clear what is believed to
-be a consequence of the rest of this License.
-
-8. If the distribution and/or use of the Program is restricted in
-certain countries either by patents or by copyrighted interfaces, the
-original copyright holder who places the Program under this License
-may add an explicit geographical distribution limitation excluding
-those countries, so that distribution is permitted only in or among
-countries not thus excluded. In such case, this License incorporates
-the limitation as if written in the body of this License.
-
-9. The Free Software Foundation may publish revised and/or new versions
-of the General Public License from time to time. Such new versions will
-be similar in spirit to the present version, but may differ in detail to
-address new problems or concerns.
-
-Each version is given a distinguishing version number. If the Program
-specifies a version number of this License which applies to it and "any
-later version", you have the option of following the terms and conditions
-either of that version or of any later version published by the Free
-Software Foundation. If the Program does not specify a version number of
-this License, you may choose any version ever published by the Free Software
-Foundation.
-
-10. If you wish to incorporate parts of the Program into other free
-programs whose distribution conditions are different, write to the author
-to ask for permission. For software which is copyrighted by the Free
-Software Foundation, write to the Free Software Foundation; we sometimes
-make exceptions for this. Our decision will be guided by the two goals
-of preserving the free status of all derivatives of our free software and
-of promoting the sharing and reuse of software generally.
-
-NO WARRANTY
-
-11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
-FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN
-OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
-PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
-OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
-MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS
-TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE
-PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
-REPAIR OR CORRECTION.
-
-12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
-WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
-REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
-INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
-OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
-TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
-YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
-PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
-POSSIBILITY OF SUCH DAMAGES.
-
-END OF TERMS AND CONDITIONS
-
-How to Apply These Terms to Your New Programs
-
-If you develop a new program, and you want it to be of the greatest
-possible use to the public, the best way to achieve this is to make it
-free software which everyone can redistribute and change under these terms.
-
-To do so, attach the following notices to the program. It is safest
-to attach them to the start of each source file to most effectively
-convey the exclusion of warranty; and each file should have at least
-the "copyright" line and a pointer to where the full notice is found.
-
-<one line to give the program's name and a brief idea of what it does.>
-Copyright (C) <year> <name of author>
-
-This program is free software; you can redistribute it and/or modify
-it under the terms of the GNU General Public License as published by
-the Free Software Foundation; either version 2 of the License, or
-(at your option) any later version.
-
-This program is distributed in the hope that it will be useful,
-but WITHOUT ANY WARRANTY; without even the implied warranty of
-MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-GNU General Public License for more details.
-
-You should have received a copy of the GNU General Public License along
-with this program; if not, write to the Free Software Foundation, Inc.,
-51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
-
-Also add information on how to contact you by electronic and paper mail.
-
-If the program is interactive, make it output a short notice like this
-when it starts in an interactive mode:
-
-Gnomovision version 69, Copyright (C) year name of author
-Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
-This is free software, and you are welcome to redistribute it
-under certain conditions; type `show c' for details.
-
-The hypothetical commands `show w' and `show c' should show the appropriate
-parts of the General Public License. Of course, the commands you use may
-be called something other than `show w' and `show c'; they could even be
-mouse-clicks or menu items--whatever suits your program.
-
-You should also get your employer (if you work as a programmer) or your
-school, if any, to sign a "copyright disclaimer" for the program, if
-necessary. Here is a sample; alter the names:
-
-Yoyodyne, Inc., hereby disclaims all copyright interest in the program
-`Gnomovision' (which makes passes at compilers) written by James Hacker.
-
-<signature of Ty Coon>, 1 April 1989
-Ty Coon, President of Vice
-
-This General Public License does not permit incorporating your program into
-proprietary programs. If your program is a subroutine library, you may
-consider it more useful to permit linking proprietary applications with the
-library. If this is what you want to do, use the GNU Lesser General
-Public License instead of this License.
COPYING3: 674 deletions
@@ -1,674 +0,0 @@
-GNU GENERAL PUBLIC LICENSE
-Version 3, 29 June 2007
-
-Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
-Everyone is permitted to copy and distribute verbatim copies
-of this license document, but changing it is not allowed.
-
-Preamble
-
-The GNU General Public License is a free, copyleft license for
-software and other kinds of works.
-
-The licenses for most software and other practical works are designed
-to take away your freedom to share and change the works. By contrast,
-the GNU General Public License is intended to guarantee your freedom to
-share and change all versions of a program--to make sure it remains free
-software for all its users. We, the Free Software Foundation, use the
-GNU General Public License for most of our software; it applies also to
-any other work released this way by its authors. You can apply it to
-your programs, too.
-
-When we speak of free software, we are referring to freedom, not
-price. Our General Public Licenses are designed to make sure that you
-have the freedom to distribute copies of free software (and charge for
-them if you wish), that you receive source code or can get it if you
-want it, that you can change the software or use pieces of it in new
-free programs, and that you know you can do these things.
-
-To protect your rights, we need to prevent others from denying you
-these rights or asking you to surrender the rights. Therefore, you have
-certain responsibilities if you distribute copies of the software, or if
-you modify it: responsibilities to respect the freedom of others.
-
-For example, if you distribute copies of such a program, whether
-gratis or for a fee, you must pass on to the recipients the same
-freedoms that you received. You must make sure that they, too, receive
-or can get the source code. And you must show them these terms so they
-know their rights.
-
-Developers that use the GNU GPL protect your rights with two steps:
-(1) assert copyright on the software, and (2) offer you this License
-giving you legal permission to copy, distribute and/or modify it.
-
-For the developers' and authors' protection, the GPL clearly explains
-that there is no warranty for this free software. For both users' and
-authors' sake, the GPL requires that modified versions be marked as
-changed, so that their problems will not be attributed erroneously to
-authors of previous versions.
-
-Some devices are designed to deny users access to install or run
-modified versions of the software inside them, although the manufacturer
-can do so. This is fundamentally incompatible with the aim of
-protecting users' freedom to change the software. The systematic
-pattern of such abuse occurs in the area of products for individuals to
-use, which is precisely where it is most unacceptable. Therefore, we
-have designed this version of the GPL to prohibit the practice for those
-products. If such problems arise substantially in other domains, we
-stand ready to extend this provision to those domains in future versions
-of the GPL, as needed to protect the freedom of users.
-
-Finally, every program is threatened constantly by software patents.
-States should not allow patents to restrict development and use of
-software on general-purpose computers, but in those that do, we wish to
-avoid the special danger that patents applied to a free program could
-make it effectively proprietary. To prevent this, the GPL assures that
-patents cannot be used to render the program non-free.
-
-The precise terms and conditions for copying, distribution and
-modification follow.
-
-TERMS AND CONDITIONS
-
-0. Definitions.
-
-"This License" refers to version 3 of the GNU General Public License.
-
-"Copyright" also means copyright-like laws that apply to other kinds of
-works, such as semiconductor masks.
-
-"The Program" refers to any copyrightable work licensed under this
-License. Each licensee is addressed as "you". "Licensees" and
-"recipients" may be individuals or organizations.
-
-To "modify" a work means to copy from or adapt all or part of the work
-in a fashion requiring copyright permission, other than the making of an
-exact copy. The resulting work is called a "modified version" of the
-earlier work or a work "based on" the earlier work.
-
-A "covered work" means either the unmodified Program or a work based
-on the Program.
-
-To "propagate" a work means to do anything with it that, without
-permission, would make you directly or secondarily liable for
-infringement under applicable copyright law, except executing it on a
-computer or modifying a private copy. Propagation includes copying,
-distribution (with or without modification), making available to the
-public, and in some countries other activities as well.
-
-To "convey" a work means any kind of propagation that enables other
-parties to make or receive copies. Mere interaction with a user through
-a computer network, with no transfer of a copy, is not conveying.
-
-An interactive user interface displays "Appropriate Legal Notices"
-to the extent that it includes a convenient and prominently visible
-feature that (1) displays an appropriate copyright notice, and (2)
-tells the user that there is no warranty for the work (except to the
-extent that warranties are provided), that licensees may convey the
-work under this License, and how to view a copy of this License. If
-the interface presents a list of user commands or options, such as a
-menu, a prominent item in the list meets this criterion.
-
-1. Source Code.
-
-The "source code" for a work means the preferred form of the work
-for making modifications to it. "Object code" means any non-source
-form of a work.
-
-A "Standard Interface" means an interface that either is an official
-standard defined by a recognized standards body, or, in the case of
-interfaces specified for a particular programming language, one that
-is widely used among developers working in that language.
-
-The "System Libraries" of an executable work include anything, other
-than the work as a whole, that (a) is included in the normal form of
-packaging a Major Component, but which is not part of that Major
-Component, and (b) serves only to enable use of the work with that
-Major Component, or to implement a Standard Interface for which an
-implementation is available to the public in source code form. A
-"Major Component", in this context, means a major essential component
-(kernel, window system, and so on) of the specific operating system
-(if any) on which the executable work runs, or a compiler used to
-produce the work, or an object code interpreter used to run it.
-
-The "Corresponding Source" for a work in object code form means all
-the source code needed to generate, install, and (for an executable
-work) run the object code and to modify the work, including scripts to
-control those activities. However, it does not include the work's
-System Libraries, or general-purpose tools or generally available free
-programs which are used unmodified in performing those activities but
-which are not part of the work. For example, Corresponding Source
-includes interface definition files associated with source files for
-the work, and the source code for shared libraries and dynamically
-linked subprograms that the work is specifically designed to require,
-such as by intimate data communication or control flow between those
-subprograms and other parts of the work.
-
-The Corresponding Source need not include anything that users
-can regenerate automatically from other parts of the Corresponding
-Source.
-
-The Corresponding Source for a work in source code form is that
-same work.
-
-2. Basic Permissions.
-
-All rights granted under this License are granted for the term of
-copyright on the Program, and are irrevocable provided the stated
-conditions are met. This License explicitly affirms your unlimited
-permission to run the unmodified Program. The output from running a
-covered work is covered by this License only if the output, given its
-content, constitutes a covered work. This License acknowledges your
-rights of fair use or other equivalent, as provided by copyright law.
-
-You may make, run and propagate covered works that you do not
-convey, without conditions so long as your license otherwise remains
-in force. You may convey covered works to others for the sole purpose
-of having them make modifications exclusively for you, or provide you
-with facilities for running those works, provided that you comply with
-the terms of this License in conveying all material for which you do
-not control copyright. Those thus making or running the covered works
-for you must do so exclusively on your behalf, under your direction
-and control, on terms that prohibit them from making any copies of
-your copyrighted material outside their relationship with you.
-
-Conveying under any other circumstances is permitted solely under
-the conditions stated below. Sublicensing is not allowed; section 10
-makes it unnecessary.
-
-3. Protecting Users' Legal Rights From Anti-Circumvention Law.
-
-No covered work shall be deemed part of an effective technological
-measure under any applicable law fulfilling obligations under article
-11 of the WIPO copyright treaty adopted on 20 December 1996, or
-similar laws prohibiting or restricting circumvention of such
-measures.
-
-When you convey a covered work, you waive any legal power to forbid
-circumvention of technological measures to the extent such circumvention
-is effected by exercising rights under this License with respect to
-the covered work, and you disclaim any intention to limit operation or
-modification of the work as a means of enforcing, against the work's
-users, your or third parties' legal rights to forbid circumvention of
-technological measures.
-
-4. Conveying Verbatim Copies.
-
-You may convey verbatim copies of the Program's source code as you
-receive it, in any medium, provided that you conspicuously and
-appropriately publish on each copy an appropriate copyright notice;
-keep intact all notices stating that this License and any
-non-permissive terms added in accord with section 7 apply to the code;
-keep intact all notices of the absence of any warranty; and give all
-recipients a copy of this License along with the Program.
-
-You may charge any price or no price for each copy that you convey,
-and you may offer support or warranty protection for a fee.
-
-5. Conveying Modified Source Versions.
-
-You may convey a work based on the Program, or the modifications to
-produce it from the Program, in the form of source code under the
-terms of section 4, provided that you also meet all of these conditions:
-
-a) The work must carry prominent notices stating that you modified
-it, and giving a relevant date.
-
-b) The work must carry prominent notices stating that it is
-released under this License and any conditions added under section
-7. This requirement modifies the requirement in section 4 to
-"keep intact all notices".
-
-c) You must license the entire work, as a whole, under this
-License to anyone who comes into possession of a copy. This
-License will therefore apply, along with any applicable section 7
-additional terms, to the whole of the work, and all its parts,
-regardless of how they are packaged. This License gives no
-permission to license the work in any other way, but it does not
-invalidate such permission if you have separately received it.
-
-d) If the work has interactive user interfaces, each must display
-Appropriate Legal Notices; however, if the Program has interactive
-interfaces that do not display Appropriate Legal Notices, your
-work need not make them do so.
-
-A compilation of a covered work with other separate and independent
-works, which are not by their nature extensions of the covered work,
-and which are not combined with it such as to form a larger program,
-in or on a volume of a storage or distribution medium, is called an
-"aggregate" if the compilation and its resulting copyright are not
-used to limit the access or legal rights of the compilation's users
-beyond what the individual works permit. Inclusion of a covered work
-in an aggregate does not cause this License to apply to the other
-parts of the aggregate.
-
-6. Conveying Non-Source Forms.
-
-You may convey a covered work in object code form under the terms
-of sections 4 and 5, provided that you also convey the
-machine-readable Corresponding Source under the terms of this License,
-in one of these ways:
-
-a) Convey the object code in, or embodied in, a physical product
-(including a physical distribution medium), accompanied by the
-Corresponding Source fixed on a durable physical medium
-customarily used for software interchange.
-
-b) Convey the object code in, or embodied in, a physical product
-(including a physical distribution medium), accompanied by a
-written offer, valid for at least three years and valid for as
-long as you offer spare parts or customer support for that product
-model, to give anyone who possesses the object code either (1) a
-copy of the Corresponding Source for all the software in the
-product that is covered by this License, on a durable physical
-medium customarily used for software interchange, for a price no
-more than your reasonable cost of physically performing this
-conveying of source, or (2) access to copy the
-Corresponding Source from a network server at no charge.
-
-c) Convey individual copies of the object code with a copy of the
-written offer to provide the Corresponding Source. This
-alternative is allowed only occasionally and noncommercially, and
-only if you received the object code with such an offer, in accord
-with subsection 6b.
-
-d) Convey the object code by offering access from a designated
-place (gratis or for a charge), and offer equivalent access to the
-Corresponding Source in the same way through the same place at no
-further charge. You need not require recipients to copy the
-Corresponding Source along with the object code. If the place to
-copy the object code is a network server, the Corresponding Source
-may be on a different server (operated by you or a third party)
-that supports equivalent copying facilities, provided you maintain
-clear directions next to the object code saying where to find the
-Corresponding Source. Regardless of what server hosts the
-Corresponding Source, you remain obligated to ensure that it is
-available for as long as needed to satisfy these requirements.
-
-e) Convey the object code using peer-to-peer transmission, provided
-you inform other peers where the object code and Corresponding
-Source of the work are being offered to the general public at no
-charge under subsection 6d.
-
-A separable portion of the object code, whose source code is excluded
-from the Corresponding Source as a System Library, need not be
-included in conveying the object code work.
-
-A "User Product" is either (1) a "consumer product", which means any
-tangible personal property which is normally used for personal, family,
-or household purposes, or (2) anything designed or sold for incorporation
-into a dwelling. In determining whether a product is a consumer product,
-doubtful cases shall be resolved in favor of coverage. For a particular
-product received by a particular user, "normally used" refers to a
-typical or common use of that class of product, regardless of the status
-of the particular user or of the way in which the particular user
-actually uses, or expects or is expected to use, the product. A product
-is a consumer product regardless of whether the product has substantial
-commercial, industrial or non-consumer uses, unless such uses represent
-the only significant mode of use of the product.
-
-"Installation Information" for a User Product means any methods,
-procedures, authorization keys, or other information required to install
-and execute modified versions of a covered work in that User Product from
-a modified version of its Corresponding Source. The information must
-suffice to ensure that the continued functioning of the modified object
-code is in no case prevented or interfered with solely because
-modification has been made.
-
-If you convey an object code work under this section in, or with, or
-specifically for use in, a User Product, and the conveying occurs as
-part of a transaction in which the right of possession and use of the
-User Product is transferred to the recipient in perpetuity or for a
-fixed term (regardless of how the transaction is characterized), the
-Corresponding Source conveyed under this section must be accompanied
-by the Installation Information. But this requirement does not apply
-if neither you nor any third party retains the ability to install
-modified object code on the User Product (for example, the work has
-been installed in ROM).
-
-The requirement to provide Installation Information does not include a
-requirement to continue to provide support service, warranty, or updates
-for a work that has been modified or installed by the recipient, or for
-the User Product in which it has been modified or installed. Access to a
-network may be denied when the modification itself materially and
-adversely affects the operation of the network or violates the rules and
-protocols for communication across the network.
-
-Corresponding Source conveyed, and Installation Information provided,
-in accord with this section must be in a format that is publicly
-documented (and with an implementation available to the public in
-source code form), and must require no special password or key for
-unpacking, reading or copying.
-
-7. Additional Terms.
-
-"Additional permissions" are terms that supplement the terms of this
-License by making exceptions from one or more of its conditions.
-Additional permissions that are applicable to the entire Program shall
-be treated as though they were included in this License, to the extent
-that they are valid under applicable law. If additional permissions
-apply only to part of the Program, that part may be used separately
-under those permissions, but the entire Program remains governed by
-this License without regard to the additional permissions.
-
-When you convey a copy of a covered work, you may at your option
-remove any additional permissions from that copy, or from any part of
-it. (Additional permissions may be written to require their own
-removal in certain cases when you modify the work.) You may place
-additional permissions on material, added by you to a covered work,
-for which you have or can give appropriate copyright permission.
-
-Notwithstanding any other provision of this License, for material you
-add to a covered work, you may (if authorized by the copyright holders of
-that material) supplement the terms of this License with terms:
-
-a) Disclaiming warranty or limiting liability differently from the
-terms of sections 15 and 16 of this License; or
-
-b) Requiring preservation of specified reasonable legal notices or
-author attributions in that material or in the Appropriate Legal
-Notices displayed by works containing it; or
-
-c) Prohibiting misrepresentation of the origin of that material, or
-requiring that modified versions of such material be marked in
-reasonable ways as different from the original version; or
-
-d) Limiting the use for publicity purposes of names of licensors or
-authors of the material; or
-
-e) Declining to grant rights under trademark law for use of some
-trade names, trademarks, or service marks; or
-
-f) Requiring indemnification of licensors and authors of that
-material by anyone who conveys the material (or modified versions of
-it) with contractual assumptions of liability to the recipient, for
-any liability that these contractual assumptions directly impose on
-those licensors and authors.
-
-All other non-permissive additional terms are considered "further
-restrictions" within the meaning of section 10. If the Program as you
-received it, or any part of it, contains a notice stating that it is
-governed by this License along with a term that is a further
-restriction, you may remove that term. If a license document contains
-a further restriction but permits relicensing or conveying under this
-License, you may add to a covered work material governed by the terms
-of that license document, provided that the further restriction does
-not survive such relicensing or conveying.
-
-If you add terms to a covered work in accord with this section, you
-must place, in the relevant source files, a statement of the
-additional terms that apply to those files, or a notice indicating
-where to find the applicable terms.
-
-Additional terms, permissive or non-permissive, may be stated in the
-form of a separately written license, or stated as exceptions;
-the above requirements apply either way.
-
-8. Termination.
-
-You may not propagate or modify a covered work except as expressly
-provided under this License. Any attempt otherwise to propagate or
-modify it is void, and will automatically terminate your rights under
-this License (including any patent licenses granted under the third
-paragraph of section 11).
-
-However, if you cease all violation of this License, then your
-license from a particular copyright holder is reinstated (a)
-provisionally, unless and until the copyright holder explicitly and
-finally terminates your license, and (b) permanently, if the copyright
-holder fails to notify you of the violation by some reasonable means
-prior to 60 days after the cessation.
-
-Moreover, your license from a particular copyright holder is
-reinstated permanently if the copyright holder notifies you of the
-violation by some reasonable means, this is the first time you have
-received notice of violation of this License (for any work) from that
-copyright holder, and you cure the violation prior to 30 days after
-your receipt of the notice.
-
-Termination of your rights under this section does not terminate the
-licenses of parties who have received copies or rights from you under
-this License. If your rights have been terminated and not permanently
-reinstated, you do not qualify to receive new licenses for the same
-material under section 10.
-
-9. Acceptance Not Required for Having Copies.
-
-You are not required to accept this License in order to receive or
-run a copy of the Program. Ancillary propagation of a covered work
-occurring solely as a consequence of using peer-to-peer transmission
-to receive a copy likewise does not require acceptance. However,
-nothing other than this License grants you permission to propagate or
-modify any covered work. These actions infringe copyright if you do
-not accept this License. Therefore, by modifying or propagating a
-covered work, you indicate your acceptance of this License to do so.
-
-10. Automatic Licensing of Downstream Recipients.
-
-Each time you convey a covered work, the recipient automatically
-receives a license from the original licensors, to run, modify and
-propagate that work, subject to this License. You are not responsible
-for enforcing compliance by third parties with this License.
-
-An "entity transaction" is a transaction transferring control of an
-organization, or substantially all assets of one, or subdividing an
-organization, or merging organizations. If propagation of a covered
-work results from an entity transaction, each party to that
-transaction who receives a copy of the work also receives whatever
-licenses to the work the party's predecessor in interest had or could
-give under the previous paragraph, plus a right to possession of the
-Corresponding Source of the work from the predecessor in interest, if
-the predecessor has it or can get it with reasonable efforts.
-
-You may not impose any further restrictions on the exercise of the
-rights granted or affirmed under this License. For example, you may
-not impose a license fee, royalty, or other charge for exercise of
-rights granted under this License, and you may not initiate litigation
-(including a cross-claim or counterclaim in a lawsuit) alleging that
-any patent claim is infringed by making, using, selling, offering for
-sale, or importing the Program or any portion of it.
-
-11. Patents.
-
-A "contributor" is a copyright holder who authorizes use under this
-License of the Program or a work on which the Program is based. The
-work thus licensed is called the contributor's "contributor version".
-
-A contributor's "essential patent claims" are all patent claims
-owned or controlled by the contributor, whether already acquired or
-hereafter acquired, that would be infringed by some manner, permitted
-by this License, of making, using, or selling its contributor version,
-but do not include claims that would be infringed only as a
-consequence of further modification of the contributor version. For
-purposes of this definition, "control" includes the right to grant
-patent sublicenses in a manner consistent with the requirements of
-this License.
-
-Each contributor grants you a non-exclusive, worldwide, royalty-free
-patent license under the contributor's essential patent claims, to
-make, use, sell, offer for sale, import and otherwise run, modify and
-propagate the contents of its contributor version.
-
-In the following three paragraphs, a "patent license" is any express
-agreement or commitment, however denominated, not to enforce a patent
-(such as an express permission to practice a patent or covenant not to
-sue for patent infringement). To "grant" such a patent license to a
-party means to make such an agreement or commitment not to enforce a
-patent against the party.
-
-If you convey a covered work, knowingly relying on a patent license,
-and the Corresponding Source of the work is not available for anyone
-to copy, free of charge and under the terms of this License, through a
-publicly available network server or other readily accessible means,
-then you must either (1) cause the Corresponding Source to be so
-available, or (2) arrange to deprive yourself of the benefit of the
-patent license for this particular work, or (3) arrange, in a manner
-consistent with the requirements of this License, to extend the patent
-license to downstream recipients. "Knowingly relying" means you have
-actual knowledge that, but for the patent license, your conveying the
-covered work in a country, or your recipient's use of the covered work
-in a country, would infringe one or more identifiable patents in that
-country that you have reason to believe are valid.
-
-If, pursuant to or in connection with a single transaction or
-arrangement, you convey, or propagate by procuring conveyance of, a
-covered work, and grant a patent license to some of the parties
-receiving the covered work authorizing them to use, propagate, modify
-or convey a specific copy of the covered work, then the patent license
-you grant is automatically extended to all recipients of the covered
-work and works based on it.
-
-A patent license is "discriminatory" if it does not include within
-the scope of its coverage, prohibits the exercise of, or is
-conditioned on the non-exercise of one or more of the rights that are
-specifically granted under this License. You may not convey a covered
-work if you are a party to an arrangement with a third party that is
-in the business of distributing software, under which you make payment
-to the third party based on the extent of your activity of conveying
-the work, and under which the third party grants, to any of the
-parties who would receive the covered work from you, a discriminatory
-patent license (a) in connection with copies of the covered work
-conveyed by you (or copies made from those copies), or (b) primarily
-for and in connection with specific products or compilations that
-contain the covered work, unless you entered into that arrangement,
-or that patent license was granted, prior to 28 March 2007.
-
-Nothing in this License shall be construed as excluding or limiting
-any implied license or other defenses to infringement that may
-otherwise be available to you under applicable patent law.
-
-12. No Surrender of Others' Freedom.
-
-If conditions are imposed on you (whether by court order, agreement or
-otherwise) that contradict the conditions of this License, they do not
-excuse you from the conditions of this License. If you cannot convey a
-covered work so as to satisfy simultaneously your obligations under this
-License and any other pertinent obligations, then as a consequence you may
-not convey it at all. For example, if you agree to terms that obligate you
-to collect a royalty for further conveying from those to whom you convey
-the Program, the only way you could satisfy both those terms and this
-License would be to refrain entirely from conveying the Program.
-
-13. Use with the GNU Affero General Public License.
-
-Notwithstanding any other provision of this License, you have
-permission to link or combine any covered work with a work licensed
-under version 3 of the GNU Affero General Public License into a single
-combined work, and to convey the resulting work. The terms of this
-License will continue to apply to the part which is the covered work,
-but the special requirements of the GNU Affero General Public License,
-section 13, concerning interaction through a network will apply to the
-combination as such.
-
-14. Revised Versions of this License.
-
-The Free Software Foundation may publish revised and/or new versions of
-the GNU General Public License from time to time. Such new versions will
-be similar in spirit to the present version, but may differ in detail to
-address new problems or concerns.
-
-Each version is given a distinguishing version number. If the
-Program specifies that a certain numbered version of the GNU General
-Public License "or any later version" applies to it, you have the
-option of following the terms and conditions either of that numbered
-version or of any later version published by the Free Software
-Foundation. If the Program does not specify a version number of the
-GNU General Public License, you may choose any version ever published
-by the Free Software Foundation.
-
-If the Program specifies that a proxy can decide which future
-versions of the GNU General Public License can be used, that proxy's
-public statement of acceptance of a version permanently authorizes you
-to choose that version for the Program.
-
-Later license versions may give you additional or different
-permissions. However, no additional obligations are imposed on any
-author or copyright holder as a result of your choosing to follow a
-later version.
-
-15. Disclaimer of Warranty.
-
-THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
-APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
-HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
-OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
-THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
-PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
-IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
-ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
-
-16. Limitation of Liability.
-
-IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
-WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
-THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
-GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
-USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
-DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
-PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
-EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
-SUCH DAMAGES.
-
-17. Interpretation of Sections 15 and 16.
-
-If the disclaimer of warranty and limitation of liability provided
-above cannot be given local legal effect according to their terms,
-reviewing courts shall apply local law that most closely approximates
-an absolute waiver of all civil liability in connection with the
-Program, unless a warranty or assumption of liability accompanies a
-copy of the Program in return for a fee.
-
-END OF TERMS AND CONDITIONS
-
-How to Apply These Terms to Your New Programs
-
-If you develop a new program, and you want it to be of the greatest
|
|
||||||
possible use to the public, the best way to achieve this is to make it
|
|
||||||
free software which everyone can redistribute and change under these terms.
|
|
||||||
|
|
||||||
To do so, attach the following notices to the program. It is safest
|
|
||||||
to attach them to the start of each source file to most effectively
|
|
||||||
state the exclusion of warranty; and each file should have at least
|
|
||||||
the "copyright" line and a pointer to where the full notice is found.
|
|
||||||
|
|
||||||
<one line to give the program's name and a brief idea of what it does.>
|
|
||||||
Copyright (C) <year> <name of author>
|
|
||||||
|
|
||||||
This program is free software: you can redistribute it and/or modify
|
|
||||||
it under the terms of the GNU General Public License as published by
|
|
||||||
the Free Software Foundation, either version 3 of the License, or
|
|
||||||
(at your option) any later version.
|
|
||||||
|
|
||||||
This program is distributed in the hope that it will be useful,
|
|
||||||
but WITHOUT ANY WARRANTY; without even the implied warranty of
|
|
||||||
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
|
||||||
GNU General Public License for more details.
|
|
||||||
|
|
||||||
You should have received a copy of the GNU General Public License
|
|
||||||
along with this program. If not, see <http://www.gnu.org/licenses/>.
|
|
||||||
|
|
||||||
Also add information on how to contact you by electronic and paper mail.
|
|
||||||
|
|
||||||
If the program does terminal interaction, make it output a short
|
|
||||||
notice like this when it starts in an interactive mode:
|
|
||||||
|
|
||||||
<program> Copyright (C) <year> <name of author>
|
|
||||||
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
|
|
||||||
This is free software, and you are welcome to redistribute it
|
|
||||||
under certain conditions; type `show c' for details.
|
|
||||||
|
|
||||||
The hypothetical commands `show w' and `show c' should show the appropriate
|
|
||||||
parts of the General Public License. Of course, your program's commands
|
|
||||||
might be different; for a GUI interface, you would use an "about box".
|
|
||||||
|
|
||||||
You should also get your employer (if you work as a programmer) or school,
|
|
||||||
if any, to sign a "copyright disclaimer" for the program, if necessary.
|
|
||||||
For more information on this, and how to apply and follow the GNU GPL, see
|
|
||||||
<http://www.gnu.org/licenses/>.
|
|
||||||
|
|
||||||
The GNU General Public License does not permit incorporating your program
|
|
||||||
into proprietary programs. If your program is a subroutine library, you
|
|
||||||
may consider it more useful to permit linking proprietary applications with
|
|
||||||
the library. If this is what you want to do, use the GNU Lesser General
|
|
||||||
Public License instead of this License. But first, please read
|
|
||||||
<http://www.gnu.org/philosophy/why-not-lgpl.html>.
|
|
BIN  DLLs/_asyncio.pyd             (new file; binary not shown)
BIN  DLLs/_bz2.pyd                 (new file; binary not shown)
BIN  DLLs/_contextvars.pyd         (new file; binary not shown)
BIN  DLLs/_ctypes.pyd              (new file; binary not shown)
BIN  DLLs/_ctypes_test.pyd         (new file; binary not shown)
BIN  DLLs/_decimal.pyd             (new file; binary not shown)
BIN  DLLs/_elementtree.pyd         (new file; binary not shown)
BIN  DLLs/_hashlib.pyd             (new file; binary not shown)
BIN  DLLs/_lzma.pyd                (new file; binary not shown)
BIN  DLLs/_msi.pyd                 (new file; binary not shown)
BIN  DLLs/_multiprocessing.pyd     (new file; binary not shown)
BIN  DLLs/_overlapped.pyd          (new file; binary not shown)
BIN  DLLs/_queue.pyd               (new file; binary not shown)
BIN  DLLs/_socket.pyd              (new file; binary not shown)
BIN  DLLs/_sqlite3.pyd             (new file; binary not shown)
BIN  DLLs/_ssl.pyd                 (new file; binary not shown)
BIN  DLLs/_testbuffer.pyd          (new file; binary not shown)
BIN  DLLs/_testcapi.pyd            (new file; binary not shown)
BIN  DLLs/_testconsole.pyd         (new file; binary not shown)
BIN  DLLs/_testimportmultiple.pyd  (new file; binary not shown)
BIN  DLLs/_testmultiphase.pyd      (new file; binary not shown)
BIN  DLLs/_tkinter.pyd             (new file; binary not shown)
BIN  DLLs/libcrypto-1_1-x64.dll    (new file; binary not shown)
BIN  DLLs/libssl-1_1-x64.dll       (new file; binary not shown)
BIN  DLLs/py.ico                   (new image, 74 KiB; not shown)
BIN  DLLs/pyc.ico                  (new image, 77 KiB; not shown)
BIN  DLLs/pyd.ico                  (new image, 81 KiB; not shown)
BIN  DLLs/pyexpat.pyd              (new file; binary not shown)
BIN  DLLs/python_lib.cat           (new file; binary not shown)
BIN  DLLs/python_tools.cat         (new file; binary not shown)
BIN  DLLs/select.pyd               (new file; binary not shown)
BIN  DLLs/sqlite3.dll              (new file; binary not shown)
BIN  DLLs/tcl86t.dll               (new file; binary not shown)
BIN  DLLs/tk86t.dll                (new file; binary not shown)
BIN  DLLs/unicodedata.pyd          (new file; binary not shown)
BIN  DLLs/winsound.pyd             (new file; binary not shown)
603  LICENSE.txt  (new file)
@@ -0,0 +1,603 @@
A. HISTORY OF THE SOFTWARE
==========================

Python was created in the early 1990s by Guido van Rossum at Stichting
Mathematisch Centrum (CWI, see http://www.cwi.nl) in the Netherlands
as a successor of a language called ABC.  Guido remains Python's
principal author, although it includes many contributions from others.

In 1995, Guido continued his work on Python at the Corporation for
National Research Initiatives (CNRI, see http://www.cnri.reston.va.us)
in Reston, Virginia where he released several versions of the
software.

In May 2000, Guido and the Python core development team moved to
BeOpen.com to form the BeOpen PythonLabs team.  In October of the same
year, the PythonLabs team moved to Digital Creations, which became
Zope Corporation.  In 2001, the Python Software Foundation (PSF, see
https://www.python.org/psf/) was formed, a non-profit organization
created specifically to own Python-related Intellectual Property.
Zope Corporation was a sponsoring member of the PSF.

All Python releases are Open Source (see http://www.opensource.org for
the Open Source Definition).  Historically, most, but not all, Python
releases have also been GPL-compatible; the table below summarizes
the various releases.

    Release         Derived     Year        Owner       GPL-
                    from                                compatible? (1)

    0.9.0 thru 1.2              1991-1995   CWI         yes
    1.3 thru 1.5.2  1.2         1995-1999   CNRI        yes
    1.6             1.5.2       2000        CNRI        no
    2.0             1.6         2000        BeOpen.com  no
    1.6.1           1.6         2001        CNRI        yes (2)
    2.1             2.0+1.6.1   2001        PSF         no
    2.0.1           2.0+1.6.1   2001        PSF         yes
    2.1.1           2.1+2.0.1   2001        PSF         yes
    2.1.2           2.1.1       2002        PSF         yes
    2.1.3           2.1.2       2002        PSF         yes
    2.2 and above   2.1.1       2001-now    PSF         yes

Footnotes:

(1) GPL-compatible doesn't mean that we're distributing Python under
    the GPL.  All Python licenses, unlike the GPL, let you distribute
    a modified version without making your changes open source.  The
    GPL-compatible licenses make it possible to combine Python with
    other software that is released under the GPL; the others don't.

(2) According to Richard Stallman, 1.6.1 is not GPL-compatible,
    because its license has a choice of law clause.  According to
    CNRI, however, Stallman's lawyer has told CNRI's lawyer that 1.6.1
    is "not incompatible" with the GPL.

Thanks to the many outside volunteers who have worked under Guido's
direction to make these releases possible.

B. TERMS AND CONDITIONS FOR ACCESSING OR OTHERWISE USING PYTHON
===============================================================

PYTHON SOFTWARE FOUNDATION LICENSE VERSION 2
--------------------------------------------

1. This LICENSE AGREEMENT is between the Python Software Foundation
("PSF"), and the Individual or Organization ("Licensee") accessing and
otherwise using this software ("Python") in source or binary form and
its associated documentation.

2. Subject to the terms and conditions of this License Agreement, PSF hereby
grants Licensee a nonexclusive, royalty-free, world-wide license to reproduce,
analyze, test, perform and/or display publicly, prepare derivative works,
distribute, and otherwise use Python alone or in any derivative version,
provided, however, that PSF's License Agreement and PSF's notice of copyright,
i.e., "Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010,
2011, 2012, 2013, 2014, 2015, 2016, 2017, 2018 Python Software Foundation; All
Rights Reserved" are retained in Python alone or in any derivative version
prepared by Licensee.

3. In the event Licensee prepares a derivative work that is based on
or incorporates Python or any part thereof, and wants to make
the derivative work available to others as provided herein, then
Licensee hereby agrees to include in any such work a brief summary of
the changes made to Python.

4. PSF is making Python available to Licensee on an "AS IS"
basis.  PSF MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
IMPLIED.  BY WAY OF EXAMPLE, BUT NOT LIMITATION, PSF MAKES NO AND
DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS
FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON WILL NOT
INFRINGE ANY THIRD PARTY RIGHTS.

5. PSF SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON
FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS
A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON,
OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.

6. This License Agreement will automatically terminate upon a material
breach of its terms and conditions.

7. Nothing in this License Agreement shall be deemed to create any
relationship of agency, partnership, or joint venture between PSF and
Licensee.  This License Agreement does not grant permission to use PSF
trademarks or trade name in a trademark sense to endorse or promote
products or services of Licensee, or any third party.

8. By copying, installing or otherwise using Python, Licensee
agrees to be bound by the terms and conditions of this License
Agreement.


BEOPEN.COM LICENSE AGREEMENT FOR PYTHON 2.0
-------------------------------------------

BEOPEN PYTHON OPEN SOURCE LICENSE AGREEMENT VERSION 1

1. This LICENSE AGREEMENT is between BeOpen.com ("BeOpen"), having an
office at 160 Saratoga Avenue, Santa Clara, CA 95051, and the
Individual or Organization ("Licensee") accessing and otherwise using
this software in source or binary form and its associated
documentation ("the Software").

2. Subject to the terms and conditions of this BeOpen Python License
Agreement, BeOpen hereby grants Licensee a non-exclusive,
royalty-free, world-wide license to reproduce, analyze, test, perform
and/or display publicly, prepare derivative works, distribute, and
otherwise use the Software alone or in any derivative version,
provided, however, that the BeOpen Python License is retained in the
Software, alone or in any derivative version prepared by Licensee.

3. BeOpen is making the Software available to Licensee on an "AS IS"
basis.  BEOPEN MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
IMPLIED.  BY WAY OF EXAMPLE, BUT NOT LIMITATION, BEOPEN MAKES NO AND
DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS
FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF THE SOFTWARE WILL NOT
INFRINGE ANY THIRD PARTY RIGHTS.

4. BEOPEN SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF THE
SOFTWARE FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS
AS A RESULT OF USING, MODIFYING OR DISTRIBUTING THE SOFTWARE, OR ANY
DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.

5. This License Agreement will automatically terminate upon a material
breach of its terms and conditions.

6. This License Agreement shall be governed by and interpreted in all
respects by the law of the State of California, excluding conflict of
law provisions.  Nothing in this License Agreement shall be deemed to
create any relationship of agency, partnership, or joint venture
between BeOpen and Licensee.  This License Agreement does not grant
permission to use BeOpen trademarks or trade names in a trademark
sense to endorse or promote products or services of Licensee, or any
third party.  As an exception, the "BeOpen Python" logos available at
http://www.pythonlabs.com/logos.html may be used according to the
permissions granted on that web page.

7. By copying, installing or otherwise using the software, Licensee
agrees to be bound by the terms and conditions of this License
Agreement.


CNRI LICENSE AGREEMENT FOR PYTHON 1.6.1
---------------------------------------

1. This LICENSE AGREEMENT is between the Corporation for National
Research Initiatives, having an office at 1895 Preston White Drive,
Reston, VA 20191 ("CNRI"), and the Individual or Organization
("Licensee") accessing and otherwise using Python 1.6.1 software in
source or binary form and its associated documentation.

2. Subject to the terms and conditions of this License Agreement, CNRI
hereby grants Licensee a nonexclusive, royalty-free, world-wide
license to reproduce, analyze, test, perform and/or display publicly,
prepare derivative works, distribute, and otherwise use Python 1.6.1
alone or in any derivative version, provided, however, that CNRI's
License Agreement and CNRI's notice of copyright, i.e., "Copyright (c)
1995-2001 Corporation for National Research Initiatives; All Rights
Reserved" are retained in Python 1.6.1 alone or in any derivative
version prepared by Licensee.  Alternately, in lieu of CNRI's License
Agreement, Licensee may substitute the following text (omitting the
quotes): "Python 1.6.1 is made available subject to the terms and
conditions in CNRI's License Agreement.  This Agreement together with
Python 1.6.1 may be located on the Internet using the following
unique, persistent identifier (known as a handle): 1895.22/1013.  This
Agreement may also be obtained from a proxy server on the Internet
using the following URL: http://hdl.handle.net/1895.22/1013".

3. In the event Licensee prepares a derivative work that is based on
or incorporates Python 1.6.1 or any part thereof, and wants to make
the derivative work available to others as provided herein, then
Licensee hereby agrees to include in any such work a brief summary of
the changes made to Python 1.6.1.

4. CNRI is making Python 1.6.1 available to Licensee on an "AS IS"
basis.  CNRI MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
IMPLIED.  BY WAY OF EXAMPLE, BUT NOT LIMITATION, CNRI MAKES NO AND
DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS
FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON 1.6.1 WILL NOT
INFRINGE ANY THIRD PARTY RIGHTS.

5. CNRI SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON
1.6.1 FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS
A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON 1.6.1,
OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.

6. This License Agreement will automatically terminate upon a material
breach of its terms and conditions.

7. This License Agreement shall be governed by the federal
intellectual property law of the United States, including without
limitation the federal copyright law, and, to the extent such
U.S. federal law does not apply, by the law of the Commonwealth of
Virginia, excluding Virginia's conflict of law provisions.
Notwithstanding the foregoing, with regard to derivative works based
on Python 1.6.1 that incorporate non-separable material that was
previously distributed under the GNU General Public License (GPL), the
law of the Commonwealth of Virginia shall govern this License
Agreement only as to issues arising under or with respect to
Paragraphs 4, 5, and 7 of this License Agreement.  Nothing in this
License Agreement shall be deemed to create any relationship of
agency, partnership, or joint venture between CNRI and Licensee.  This
License Agreement does not grant permission to use CNRI trademarks or
trade name in a trademark sense to endorse or promote products or
services of Licensee, or any third party.

8. By clicking on the "ACCEPT" button where indicated, or by copying,
installing or otherwise using Python 1.6.1, Licensee agrees to be
bound by the terms and conditions of this License Agreement.

        ACCEPT


CWI LICENSE AGREEMENT FOR PYTHON 0.9.0 THROUGH 1.2
--------------------------------------------------

Copyright (c) 1991 - 1995, Stichting Mathematisch Centrum Amsterdam,
The Netherlands.  All rights reserved.

Permission to use, copy, modify, and distribute this software and its
documentation for any purpose and without fee is hereby granted,
provided that the above copyright notice appear in all copies and that
both that copyright notice and this permission notice appear in
supporting documentation, and that the name of Stichting Mathematisch
Centrum or CWI not be used in advertising or publicity pertaining to
distribution of the software without specific, written prior
permission.

STICHTING MATHEMATISCH CENTRUM DISCLAIMS ALL WARRANTIES WITH REGARD TO
THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND
FITNESS, IN NO EVENT SHALL STICHTING MATHEMATISCH CENTRUM BE LIABLE
FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

Additional Conditions for this Windows binary build
---------------------------------------------------

This program is linked with and uses Microsoft Distributable Code,
copyrighted by Microsoft Corporation. The Microsoft Distributable Code
is embedded in each .exe, .dll and .pyd file as a result of running
the code through a linker.

If you further distribute programs that include the Microsoft
Distributable Code, you must comply with the restrictions on
distribution specified by Microsoft. In particular, you must require
distributors and external end users to agree to terms that protect the
Microsoft Distributable Code at least as much as Microsoft's own
requirements for the Distributable Code. See Microsoft's documentation
(included in its developer tools and on its website at microsoft.com)
for specific details.

Redistribution of the Windows binary build of the Python interpreter
complies with this agreement, provided that you do not:

- alter any copyright, trademark or patent notice in Microsoft's
  Distributable Code;

- use Microsoft's trademarks in your programs' names or in a way that
  suggests your programs come from or are endorsed by Microsoft;

- distribute Microsoft's Distributable Code to run on a platform other
  than Microsoft operating systems, run-time technologies or application
  platforms; or

- include Microsoft Distributable Code in malicious, deceptive or
  unlawful programs.

These restrictions apply only to the Microsoft Distributable Code as
defined above, not to Python itself or any programs running on the
Python interpreter. The redistribution of the Python interpreter and
libraries is governed by the Python Software License included with this
file, or by other licenses as marked.


--------------------------------------------------------------------------

This program, "bzip2", the associated library "libbzip2", and all
|
||||||
|
documentation, are copyright (C) 1996-2010 Julian R Seward. All
|
||||||
|
rights reserved.
|
||||||
|
|
||||||
|
Redistribution and use in source and binary forms, with or without
|
||||||
|
modification, are permitted provided that the following conditions
|
||||||
|
are met:
|
||||||
|
|
||||||
|
1. Redistributions of source code must retain the above copyright
|
||||||
|
notice, this list of conditions and the following disclaimer.
|
||||||
|
|
||||||
|
2. The origin of this software must not be misrepresented; you must
|
||||||
|
not claim that you wrote the original software. If you use this
|
||||||
|
software in a product, an acknowledgment in the product
|
||||||
|
documentation would be appreciated but is not required.
|
||||||
|
|
||||||
|
3. Altered source versions must be plainly marked as such, and must
|
||||||
|
not be misrepresented as being the original software.
|
||||||
|
|
||||||
|
4. The name of the author may not be used to endorse or promote
|
||||||
|
products derived from this software without specific prior written
|
||||||
|
permission.
|
||||||
|
|
||||||
|
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS
|
||||||
|
OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
|
||||||
|
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
|
||||||
|
ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
|
||||||
|
DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
|
||||||
|
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE
|
||||||
|
GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
|
||||||
|
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
|
||||||
|
WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
|
||||||
|
NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
|
||||||
|
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
|
||||||
|
|
||||||
|
Julian Seward, jseward@bzip.org
|
||||||
|
bzip2/libbzip2 version 1.0.6 of 6 September 2010
|
||||||
|
|
||||||
|
--------------------------------------------------------------------------
|
||||||
|
|
||||||
|
|
||||||
|
  LICENSE ISSUES
  ==============

  The OpenSSL toolkit stays under a double license, i.e. both the conditions of
  the OpenSSL License and the original SSLeay license apply to the toolkit.
  See below for the actual license texts.

  OpenSSL License
  ---------------

/* ====================================================================
 * Copyright (c) 1998-2018 The OpenSSL Project.  All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 *
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 *
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in
 *    the documentation and/or other materials provided with the
 *    distribution.
 *
 * 3. All advertising materials mentioning features or use of this
 *    software must display the following acknowledgment:
 *    "This product includes software developed by the OpenSSL Project
 *    for use in the OpenSSL Toolkit. (http://www.openssl.org/)"
 *
 * 4. The names "OpenSSL Toolkit" and "OpenSSL Project" must not be used to
 *    endorse or promote products derived from this software without
 *    prior written permission. For written permission, please contact
 *    openssl-core@openssl.org.
 *
 * 5. Products derived from this software may not be called "OpenSSL"
 *    nor may "OpenSSL" appear in their names without prior written
 *    permission of the OpenSSL Project.
 *
 * 6. Redistributions of any form whatsoever must retain the following
 *    acknowledgment:
 *    "This product includes software developed by the OpenSSL Project
 *    for use in the OpenSSL Toolkit (http://www.openssl.org/)"
 *
 * THIS SOFTWARE IS PROVIDED BY THE OpenSSL PROJECT ``AS IS'' AND ANY
 * EXPRESSED OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
 * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE OpenSSL PROJECT OR
 * ITS CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
 * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
 * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
 * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
 * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
 * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
 * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
 * OF THE POSSIBILITY OF SUCH DAMAGE.
 * ====================================================================
 *
 * This product includes cryptographic software written by Eric Young
 * (eay@cryptsoft.com).  This product includes software written by Tim
 * Hudson (tjh@cryptsoft.com).
 *
 */

 Original SSLeay License
 -----------------------

/* Copyright (C) 1995-1998 Eric Young (eay@cryptsoft.com)
 * All rights reserved.
 *
 * This package is an SSL implementation written
 * by Eric Young (eay@cryptsoft.com).
 * The implementation was written so as to conform with Netscapes SSL.
 *
 * This library is free for commercial and non-commercial use as long as
 * the following conditions are aheared to.  The following conditions
 * apply to all code found in this distribution, be it the RC4, RSA,
 * lhash, DES, etc., code; not just the SSL code.  The SSL documentation
 * included with this distribution is covered by the same copyright terms
 * except that the holder is Tim Hudson (tjh@cryptsoft.com).
 *
 * Copyright remains Eric Young's, and as such any Copyright notices in
 * the code are not to be removed.
 * If this package is used in a product, Eric Young should be given attribution
 * as the author of the parts of the library used.
 * This can be in the form of a textual message at program startup or
 * in documentation (online or textual) provided with the package.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 * 3. All advertising materials mentioning features or use of this software
 *    must display the following acknowledgement:
 *    "This product includes cryptographic software written by
 *     Eric Young (eay@cryptsoft.com)"
 *    The word 'cryptographic' can be left out if the rouines from the library
 *    being used are not cryptographic related :-).
 * 4. If you include any Windows specific code (or a derivative thereof) from
 *    the apps directory (application code) you must include an acknowledgement:
 *    "This product includes software written by Tim Hudson (tjh@cryptsoft.com)"
 *
 * THIS SOFTWARE IS PROVIDED BY ERIC YOUNG ``AS IS'' AND
 * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED.  IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
 * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
 * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
 * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
 * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
 * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 * SUCH DAMAGE.
 *
 * The licence and distribution terms for any publically available version or
 * derivative of this code cannot be changed.  i.e. this code cannot simply be
 * copied and put under another distribution licence
 * [including the GNU Public Licence.]
 */

This software is copyrighted by the Regents of the University of
California, Sun Microsystems, Inc., Scriptics Corporation, ActiveState
Corporation and other parties.  The following terms apply to all files
associated with the software unless explicitly disclaimed in
individual files.

The authors hereby grant permission to use, copy, modify, distribute,
and license this software and its documentation for any purpose, provided
that existing copyright notices are retained in all copies and that this
notice is included verbatim in any distributions. No written agreement,
license, or royalty fee is required for any of the authorized uses.
Modifications to this software may be copyrighted by their authors
and need not follow the licensing terms described here, provided that
the new terms are clearly indicated on the first page of each file where
they apply.

IN NO EVENT SHALL THE AUTHORS OR DISTRIBUTORS BE LIABLE TO ANY PARTY
FOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES
ARISING OUT OF THE USE OF THIS SOFTWARE, ITS DOCUMENTATION, OR ANY
DERIVATIVES THEREOF, EVEN IF THE AUTHORS HAVE BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.

THE AUTHORS AND DISTRIBUTORS SPECIFICALLY DISCLAIM ANY WARRANTIES,
INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE, AND NON-INFRINGEMENT.  THIS SOFTWARE
IS PROVIDED ON AN "AS IS" BASIS, AND THE AUTHORS AND DISTRIBUTORS HAVE
NO OBLIGATION TO PROVIDE MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS, OR
MODIFICATIONS.

GOVERNMENT USE: If you are acquiring this software on behalf of the
U.S. government, the Government shall have only "Restricted Rights"
in the software and related documentation as defined in the Federal
Acquisition Regulations (FARs) in Clause 52.227.19 (c) (2).  If you
are acquiring the software on behalf of the Department of Defense, the
software shall be classified as "Commercial Computer Software" and the
Government shall have only "Restricted Rights" as defined in Clause
252.227-7014 (b) (3) of DFARs.  Notwithstanding the foregoing, the
authors grant the U.S. Government and others acting in its behalf
permission to use and distribute the software in accordance with the
terms specified in this license.

This software is copyrighted by the Regents of the University of
California, Sun Microsystems, Inc., Scriptics Corporation, ActiveState
Corporation, Apple Inc. and other parties.  The following terms apply to
all files associated with the software unless explicitly disclaimed in
individual files.

The authors hereby grant permission to use, copy, modify, distribute,
and license this software and its documentation for any purpose, provided
that existing copyright notices are retained in all copies and that this
notice is included verbatim in any distributions. No written agreement,
license, or royalty fee is required for any of the authorized uses.
Modifications to this software may be copyrighted by their authors
and need not follow the licensing terms described here, provided that
the new terms are clearly indicated on the first page of each file where
they apply.

IN NO EVENT SHALL THE AUTHORS OR DISTRIBUTORS BE LIABLE TO ANY PARTY
FOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES
ARISING OUT OF THE USE OF THIS SOFTWARE, ITS DOCUMENTATION, OR ANY
DERIVATIVES THEREOF, EVEN IF THE AUTHORS HAVE BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.

THE AUTHORS AND DISTRIBUTORS SPECIFICALLY DISCLAIM ANY WARRANTIES,
INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE, AND NON-INFRINGEMENT.  THIS SOFTWARE
IS PROVIDED ON AN "AS IS" BASIS, AND THE AUTHORS AND DISTRIBUTORS HAVE
NO OBLIGATION TO PROVIDE MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS, OR
MODIFICATIONS.

GOVERNMENT USE: If you are acquiring this software on behalf of the
U.S. government, the Government shall have only "Restricted Rights"
in the software and related documentation as defined in the Federal
Acquisition Regulations (FARs) in Clause 52.227.19 (c) (2).  If you
are acquiring the software on behalf of the Department of Defense, the
software shall be classified as "Commercial Computer Software" and the
Government shall have only "Restricted Rights" as defined in Clause
252.227-7013 (b) (3) of DFARs.  Notwithstanding the foregoing, the
authors grant the U.S. Government and others acting in its behalf
permission to use and distribute the software in accordance with the
terms specified in this license.

Copyright (c) 1993-1999 Ioi Kim Lam.
Copyright (c) 2000-2001 Tix Project Group.
Copyright (c) 2004 ActiveState

This software is copyrighted by the above entities
and other parties.  The following terms apply to all files associated
with the software unless explicitly disclaimed in individual files.

The authors hereby grant permission to use, copy, modify, distribute,
and license this software and its documentation for any purpose, provided
that existing copyright notices are retained in all copies and that this
notice is included verbatim in any distributions. No written agreement,
license, or royalty fee is required for any of the authorized uses.
Modifications to this software may be copyrighted by their authors
and need not follow the licensing terms described here, provided that
the new terms are clearly indicated on the first page of each file where
they apply.

IN NO EVENT SHALL THE AUTHORS OR DISTRIBUTORS BE LIABLE TO ANY PARTY
FOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES
ARISING OUT OF THE USE OF THIS SOFTWARE, ITS DOCUMENTATION, OR ANY
DERIVATIVES THEREOF, EVEN IF THE AUTHORS HAVE BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.

THE AUTHORS AND DISTRIBUTORS SPECIFICALLY DISCLAIM ANY WARRANTIES,
INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE, AND NON-INFRINGEMENT.  THIS SOFTWARE
IS PROVIDED ON AN "AS IS" BASIS, AND THE AUTHORS AND DISTRIBUTORS HAVE
NO OBLIGATION TO PROVIDE MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS, OR
MODIFICATIONS.

GOVERNMENT USE: If you are acquiring this software on behalf of the
U.S. government, the Government shall have only "Restricted Rights"
in the software and related documentation as defined in the Federal
Acquisition Regulations (FARs) in Clause 52.227.19 (c) (2).  If you
are acquiring the software on behalf of the Department of Defense, the
software shall be classified as "Commercial Computer Software" and the
Government shall have only "Restricted Rights" as defined in Clause
252.227-7013 (c) (1) of DFARs.  Notwithstanding the foregoing, the
authors grant the U.S. Government and others acting in its behalf
permission to use and distribute the software in accordance with the
terms specified in this license.

----------------------------------------------------------------------

Parts of this software are based on the Tcl/Tk software copyrighted by
the Regents of the University of California, Sun Microsystems, Inc.,
and other parties.  The original license terms of the Tcl/Tk software
distribution is included in the file docs/license.tcltk.

Parts of this software are based on the HTML Library software
copyrighted by Sun Microsystems, Inc.  The original license terms of
the HTML Library software distribution is included in the file
docs/license.html_lib.

146  Lib/__future__.py  (new file)
@@ -0,0 +1,146 @@
"""Record of phased-in incompatible language changes.

Each line is of the form:

    FeatureName = "_Feature(" OptionalRelease "," MandatoryRelease ","
                              CompilerFlag ")"

where, normally, OptionalRelease < MandatoryRelease, and both are 5-tuples
of the same form as sys.version_info:

    (PY_MAJOR_VERSION, # the 2 in 2.1.0a3; an int
     PY_MINOR_VERSION, # the 1; an int
     PY_MICRO_VERSION, # the 0; an int
     PY_RELEASE_LEVEL, # "alpha", "beta", "candidate" or "final"; string
     PY_RELEASE_SERIAL # the 3; an int
    )

OptionalRelease records the first release in which

    from __future__ import FeatureName

was accepted.

In the case of MandatoryReleases that have not yet occurred,
MandatoryRelease predicts the release in which the feature will become part
of the language.

Else MandatoryRelease records when the feature became part of the language;
in releases at or after that, modules no longer need

    from __future__ import FeatureName

to use the feature in question, but may continue to use such imports.

MandatoryRelease may also be None, meaning that a planned feature got
dropped.

Instances of class _Feature have two corresponding methods,
.getOptionalRelease() and .getMandatoryRelease().

CompilerFlag is the (bitfield) flag that should be passed in the fourth
argument to the builtin function compile() to enable the feature in
dynamically compiled code.  This flag is stored in the .compiler_flag
attribute on _Future instances.  These values must match the appropriate
#defines of CO_xxx flags in Include/compile.h.

No feature line is ever to be deleted from this file.
"""

all_feature_names = [
    "nested_scopes",
    "generators",
    "division",
    "absolute_import",
    "with_statement",
    "print_function",
    "unicode_literals",
    "barry_as_FLUFL",
    "generator_stop",
    "annotations",
]

__all__ = ["all_feature_names"] + all_feature_names

# The CO_xxx symbols are defined here under the same names defined in
# code.h and used by compile.h, so that an editor search will find them here.
# However, they're not exported in __all__, because they don't really belong to
# this module.
CO_NESTED = 0x0010                      # nested_scopes
CO_GENERATOR_ALLOWED = 0                # generators (obsolete, was 0x1000)
CO_FUTURE_DIVISION = 0x2000             # division
CO_FUTURE_ABSOLUTE_IMPORT = 0x4000      # perform absolute imports by default
CO_FUTURE_WITH_STATEMENT = 0x8000       # with statement
CO_FUTURE_PRINT_FUNCTION = 0x10000      # print function
CO_FUTURE_UNICODE_LITERALS = 0x20000    # unicode string literals
CO_FUTURE_BARRY_AS_BDFL = 0x40000
CO_FUTURE_GENERATOR_STOP = 0x80000      # StopIteration becomes RuntimeError in generators
CO_FUTURE_ANNOTATIONS = 0x100000        # annotations become strings at runtime


class _Feature:

    def __init__(self, optionalRelease, mandatoryRelease, compiler_flag):
        self.optional = optionalRelease
        self.mandatory = mandatoryRelease
        self.compiler_flag = compiler_flag

    def getOptionalRelease(self):
        """Return first release in which this feature was recognized.

        This is a 5-tuple, of the same form as sys.version_info.
        """

        return self.optional

    def getMandatoryRelease(self):
        """Return release in which this feature will become mandatory.

        This is a 5-tuple, of the same form as sys.version_info, or, if
        the feature was dropped, is None.
        """

        return self.mandatory

    def __repr__(self):
        return "_Feature" + repr((self.optional,
                                  self.mandatory,
                                  self.compiler_flag))


nested_scopes = _Feature((2, 1, 0, "beta", 1),
                         (2, 2, 0, "alpha", 0),
                         CO_NESTED)

generators = _Feature((2, 2, 0, "alpha", 1),
                      (2, 3, 0, "final", 0),
                      CO_GENERATOR_ALLOWED)

division = _Feature((2, 2, 0, "alpha", 2),
                    (3, 0, 0, "alpha", 0),
                    CO_FUTURE_DIVISION)

absolute_import = _Feature((2, 5, 0, "alpha", 1),
                           (3, 0, 0, "alpha", 0),
                           CO_FUTURE_ABSOLUTE_IMPORT)

with_statement = _Feature((2, 5, 0, "alpha", 1),
                          (2, 6, 0, "alpha", 0),
                          CO_FUTURE_WITH_STATEMENT)

print_function = _Feature((2, 6, 0, "alpha", 2),
                          (3, 0, 0, "alpha", 0),
                          CO_FUTURE_PRINT_FUNCTION)

unicode_literals = _Feature((2, 6, 0, "alpha", 2),
                            (3, 0, 0, "alpha", 0),
                            CO_FUTURE_UNICODE_LITERALS)

barry_as_FLUFL = _Feature((3, 1, 0, "alpha", 2),
                          (3, 9, 0, "alpha", 0),
                          CO_FUTURE_BARRY_AS_BDFL)

generator_stop = _Feature((3, 5, 0, "beta", 1),
                          (3, 7, 0, "alpha", 0),
                          CO_FUTURE_GENERATOR_STOP)

annotations = _Feature((3, 7, 0, "beta", 1),
                       (4, 0, 0, "alpha", 0),
                       CO_FUTURE_ANNOTATIONS)
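
Editor's note, not part of the diff: the docstring above says each feature's
compiler_flag can be passed as the fourth argument to the builtin compile()
to enable the feature in dynamically compiled code. A minimal sketch of that
usage under CPython 3.7, using the annotations feature added here (the name
NotDefinedAnywhere is a deliberately undefined placeholder):

    import __future__

    feature = __future__.annotations
    print(feature.getOptionalRelease())   # (3, 7, 0, 'beta', 1)
    print(feature.getMandatoryRelease())  # (4, 0, 0, 'alpha', 0)

    # Passing the feature's compiler_flag to compile() turns the feature on
    # for the resulting code object. With PEP 563 string annotations, the
    # annotation below is never evaluated, so exec() succeeds even though
    # the annotated name does not exist; without the flag it would raise
    # NameError when the def statement runs.
    src = "def f(x: NotDefinedAnywhere): pass"
    code = compile(src, "<example>", "exec", flags=feature.compiler_flag)
    exec(code)
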
1  Lib/__phello__.foo.py  (new file)
@@ -0,0 +1 @@
# This file exists as a helper for the test.test_frozen module.
46  Lib/_bootlocale.py  (new file)
@@ -0,0 +1,46 @@
"""A minimal subset of the locale module used at interpreter startup
(imported by the _io module), in order to reduce startup time.

Don't import directly from third-party code; use the `locale` module instead!
"""

import sys
import _locale

if sys.platform.startswith("win"):
    def getpreferredencoding(do_setlocale=True):
        if sys.flags.utf8_mode:
            return 'UTF-8'
        return _locale._getdefaultlocale()[1]
else:
    try:
        _locale.CODESET
    except AttributeError:
        if hasattr(sys, 'getandroidapilevel'):
            # On Android langinfo.h and CODESET are missing, and UTF-8 is
            # always used in mbstowcs() and wcstombs().
            def getpreferredencoding(do_setlocale=True):
                return 'UTF-8'
        else:
            def getpreferredencoding(do_setlocale=True):
                if sys.flags.utf8_mode:
                    return 'UTF-8'
                # This path for legacy systems needs the more complex
                # getdefaultlocale() function, import the full locale module.
                import locale
                return locale.getpreferredencoding(do_setlocale)
    else:
        def getpreferredencoding(do_setlocale=True):
            assert not do_setlocale
            if sys.flags.utf8_mode:
                return 'UTF-8'
            result = _locale.nl_langinfo(_locale.CODESET)
            if not result and sys.platform == 'darwin':
                # nl_langinfo can return an empty string
                # when the setting has an invalid value.
                # Default to UTF-8 in that case because
                # UTF-8 is the default charset on OSX and
                # returning nothing will crash the
                # interpreter.
                result = 'UTF-8'
            return result
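
Editor's note, not part of the diff: per the docstring above, application
code should not import _bootlocale directly; the public locale module
exposes the same lookup. An illustrative one-liner:

    import locale

    # Equivalent public API; do_setlocale=False mirrors what _bootlocale
    # provides at startup. The printed value is system-dependent, e.g.
    # 'cp1252' on many Windows installs, 'UTF-8' on most Unix systems.
    print(locale.getpreferredencoding(False))
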
1011  Lib/_collections_abc.py  (new file; diff suppressed because it is too large)
251  Lib/_compat_pickle.py  (new file)
@@ -0,0 +1,251 @@
# This module is used to map the old Python 2 names to the new names used in
# Python 3 for the pickle module.  This is needed to make pickle streams
# generated with Python 2 loadable by Python 3.

# This is a copy of lib2to3.fixes.fix_imports.MAPPING.  We cannot import
# lib2to3 and use the mapping defined there, because lib2to3 uses pickle.
# Thus, this could cause the module to be imported recursively.
IMPORT_MAPPING = {
    '__builtin__' : 'builtins',
    'copy_reg': 'copyreg',
    'Queue': 'queue',
    'SocketServer': 'socketserver',
    'ConfigParser': 'configparser',
    'repr': 'reprlib',
    'tkFileDialog': 'tkinter.filedialog',
    'tkSimpleDialog': 'tkinter.simpledialog',
    'tkColorChooser': 'tkinter.colorchooser',
    'tkCommonDialog': 'tkinter.commondialog',
    'Dialog': 'tkinter.dialog',
    'Tkdnd': 'tkinter.dnd',
    'tkFont': 'tkinter.font',
    'tkMessageBox': 'tkinter.messagebox',
    'ScrolledText': 'tkinter.scrolledtext',
    'Tkconstants': 'tkinter.constants',
    'Tix': 'tkinter.tix',
    'ttk': 'tkinter.ttk',
    'Tkinter': 'tkinter',
    'markupbase': '_markupbase',
    '_winreg': 'winreg',
    'thread': '_thread',
    'dummy_thread': '_dummy_thread',
    'dbhash': 'dbm.bsd',
    'dumbdbm': 'dbm.dumb',
    'dbm': 'dbm.ndbm',
    'gdbm': 'dbm.gnu',
    'xmlrpclib': 'xmlrpc.client',
    'SimpleXMLRPCServer': 'xmlrpc.server',
    'httplib': 'http.client',
    'htmlentitydefs' : 'html.entities',
    'HTMLParser' : 'html.parser',
    'Cookie': 'http.cookies',
    'cookielib': 'http.cookiejar',
    'BaseHTTPServer': 'http.server',
    'test.test_support': 'test.support',
    'commands': 'subprocess',
    'urlparse' : 'urllib.parse',
    'robotparser' : 'urllib.robotparser',
    'urllib2': 'urllib.request',
    'anydbm': 'dbm',
    '_abcoll' : 'collections.abc',
}


# This contains rename rules that are easy to handle.  We ignore the more
# complex stuff (e.g. mapping the names in the urllib and types modules).
# These rules should be run before import names are fixed.
NAME_MAPPING = {
    ('__builtin__', 'xrange'): ('builtins', 'range'),
    ('__builtin__', 'reduce'): ('functools', 'reduce'),
    ('__builtin__', 'intern'): ('sys', 'intern'),
    ('__builtin__', 'unichr'): ('builtins', 'chr'),
    ('__builtin__', 'unicode'): ('builtins', 'str'),
    ('__builtin__', 'long'): ('builtins', 'int'),
    ('itertools', 'izip'): ('builtins', 'zip'),
    ('itertools', 'imap'): ('builtins', 'map'),
    ('itertools', 'ifilter'): ('builtins', 'filter'),
    ('itertools', 'ifilterfalse'): ('itertools', 'filterfalse'),
    ('itertools', 'izip_longest'): ('itertools', 'zip_longest'),
    ('UserDict', 'IterableUserDict'): ('collections', 'UserDict'),
    ('UserList', 'UserList'): ('collections', 'UserList'),
    ('UserString', 'UserString'): ('collections', 'UserString'),
    ('whichdb', 'whichdb'): ('dbm', 'whichdb'),
    ('_socket', 'fromfd'): ('socket', 'fromfd'),
    ('_multiprocessing', 'Connection'): ('multiprocessing.connection', 'Connection'),
    ('multiprocessing.process', 'Process'): ('multiprocessing.context', 'Process'),
    ('multiprocessing.forking', 'Popen'): ('multiprocessing.popen_fork', 'Popen'),
    ('urllib', 'ContentTooShortError'): ('urllib.error', 'ContentTooShortError'),
    ('urllib', 'getproxies'): ('urllib.request', 'getproxies'),
    ('urllib', 'pathname2url'): ('urllib.request', 'pathname2url'),
    ('urllib', 'quote_plus'): ('urllib.parse', 'quote_plus'),
    ('urllib', 'quote'): ('urllib.parse', 'quote'),
    ('urllib', 'unquote_plus'): ('urllib.parse', 'unquote_plus'),
    ('urllib', 'unquote'): ('urllib.parse', 'unquote'),
    ('urllib', 'url2pathname'): ('urllib.request', 'url2pathname'),
    ('urllib', 'urlcleanup'): ('urllib.request', 'urlcleanup'),
    ('urllib', 'urlencode'): ('urllib.parse', 'urlencode'),
    ('urllib', 'urlopen'): ('urllib.request', 'urlopen'),
    ('urllib', 'urlretrieve'): ('urllib.request', 'urlretrieve'),
    ('urllib2', 'HTTPError'): ('urllib.error', 'HTTPError'),
    ('urllib2', 'URLError'): ('urllib.error', 'URLError'),
}
|
||||||
|
|
||||||
|
PYTHON2_EXCEPTIONS = (
|
||||||
|
"ArithmeticError",
|
||||||
|
"AssertionError",
|
||||||
|
"AttributeError",
|
||||||
|
"BaseException",
|
||||||
|
"BufferError",
|
||||||
|
"BytesWarning",
|
||||||
|
"DeprecationWarning",
|
||||||
|
"EOFError",
|
||||||
|
"EnvironmentError",
|
||||||
|
"Exception",
|
||||||
|
"FloatingPointError",
|
||||||
|
"FutureWarning",
|
||||||
|
"GeneratorExit",
|
||||||
|
"IOError",
|
||||||
|
"ImportError",
|
||||||
|
"ImportWarning",
|
||||||
|
"IndentationError",
|
||||||
|
"IndexError",
|
||||||
|
"KeyError",
|
||||||
|
"KeyboardInterrupt",
|
||||||
|
"LookupError",
|
||||||
|
"MemoryError",
|
||||||
|
"NameError",
|
||||||
|
"NotImplementedError",
|
||||||
|
"OSError",
|
||||||
|
"OverflowError",
|
||||||
|
"PendingDeprecationWarning",
|
||||||
|
"ReferenceError",
|
||||||
|
"RuntimeError",
|
||||||
|
"RuntimeWarning",
|
||||||
|
# StandardError is gone in Python 3, so we map it to Exception
|
||||||
|
"StopIteration",
|
||||||
|
"SyntaxError",
|
||||||
|
"SyntaxWarning",
|
||||||
|
"SystemError",
|
||||||
|
"SystemExit",
|
||||||
|
"TabError",
|
||||||
|
"TypeError",
|
||||||
|
"UnboundLocalError",
|
||||||
|
"UnicodeDecodeError",
|
||||||
|
"UnicodeEncodeError",
|
||||||
|
"UnicodeError",
|
||||||
|
"UnicodeTranslateError",
|
||||||
|
"UnicodeWarning",
|
||||||
|
"UserWarning",
|
||||||
|
"ValueError",
|
||||||
|
"Warning",
|
||||||
|
"ZeroDivisionError",
|
||||||
|
)
|
||||||
|
|
||||||
|
try:
|
||||||
|
WindowsError
|
||||||
|
except NameError:
|
||||||
|
pass
|
||||||
|
else:
|
||||||
|
PYTHON2_EXCEPTIONS += ("WindowsError",)
|
||||||
|
|
||||||
|
for excname in PYTHON2_EXCEPTIONS:
|
||||||
|
NAME_MAPPING[("exceptions", excname)] = ("builtins", excname)
|
||||||
|
|
||||||
|
MULTIPROCESSING_EXCEPTIONS = (
|
||||||
|
'AuthenticationError',
|
||||||
|
'BufferTooShort',
|
||||||
|
'ProcessError',
|
||||||
|
'TimeoutError',
|
||||||
|
)
|
||||||
|
|
||||||
|
for excname in MULTIPROCESSING_EXCEPTIONS:
|
||||||
|
NAME_MAPPING[("multiprocessing", excname)] = ("multiprocessing.context", excname)
|
||||||
|
|
||||||
|
# Same, but for 3.x to 2.x
|
||||||
|
REVERSE_IMPORT_MAPPING = dict((v, k) for (k, v) in IMPORT_MAPPING.items())
|
||||||
|
assert len(REVERSE_IMPORT_MAPPING) == len(IMPORT_MAPPING)
|
||||||
|
REVERSE_NAME_MAPPING = dict((v, k) for (k, v) in NAME_MAPPING.items())
|
||||||
|
assert len(REVERSE_NAME_MAPPING) == len(NAME_MAPPING)
|
||||||
|
|
||||||
|
# Non-mutual mappings.
|
||||||
|
|
||||||
|
IMPORT_MAPPING.update({
|
||||||
|
'cPickle': 'pickle',
|
||||||
|
'_elementtree': 'xml.etree.ElementTree',
|
||||||
|
'FileDialog': 'tkinter.filedialog',
|
||||||
|
'SimpleDialog': 'tkinter.simpledialog',
|
||||||
|
'DocXMLRPCServer': 'xmlrpc.server',
|
||||||
|
'SimpleHTTPServer': 'http.server',
|
||||||
|
'CGIHTTPServer': 'http.server',
|
||||||
|
# For compatibility with broken pickles saved in old Python 3 versions
|
||||||
|
'UserDict': 'collections',
|
||||||
|
'UserList': 'collections',
|
||||||
|
'UserString': 'collections',
|
||||||
|
'whichdb': 'dbm',
|
||||||
|
'StringIO': 'io',
|
||||||
|
'cStringIO': 'io',
|
||||||
|
})
|
||||||
|
|
||||||
|
REVERSE_IMPORT_MAPPING.update({
|
||||||
|
'_bz2': 'bz2',
|
||||||
|
'_dbm': 'dbm',
|
||||||
|
'_functools': 'functools',
|
||||||
|
'_gdbm': 'gdbm',
|
||||||
|
'_pickle': 'pickle',
|
||||||
|
})
|
||||||
|
|
||||||
|
NAME_MAPPING.update({
|
||||||
|
('__builtin__', 'basestring'): ('builtins', 'str'),
|
||||||
|
('exceptions', 'StandardError'): ('builtins', 'Exception'),
|
||||||
|
('UserDict', 'UserDict'): ('collections', 'UserDict'),
|
||||||
|
('socket', '_socketobject'): ('socket', 'SocketType'),
|
||||||
|
})
|
||||||
|
|
||||||
|
REVERSE_NAME_MAPPING.update({
|
||||||
|
('_functools', 'reduce'): ('__builtin__', 'reduce'),
|
||||||
|
('tkinter.filedialog', 'FileDialog'): ('FileDialog', 'FileDialog'),
|
||||||
|
('tkinter.filedialog', 'LoadFileDialog'): ('FileDialog', 'LoadFileDialog'),
|
||||||
|
('tkinter.filedialog', 'SaveFileDialog'): ('FileDialog', 'SaveFileDialog'),
|
||||||
|
('tkinter.simpledialog', 'SimpleDialog'): ('SimpleDialog', 'SimpleDialog'),
|
||||||
|
('xmlrpc.server', 'ServerHTMLDoc'): ('DocXMLRPCServer', 'ServerHTMLDoc'),
|
||||||
|
('xmlrpc.server', 'XMLRPCDocGenerator'):
|
||||||
|
('DocXMLRPCServer', 'XMLRPCDocGenerator'),
|
||||||
|
('xmlrpc.server', 'DocXMLRPCRequestHandler'):
|
||||||
|
('DocXMLRPCServer', 'DocXMLRPCRequestHandler'),
|
||||||
|
('xmlrpc.server', 'DocXMLRPCServer'):
|
||||||
|
('DocXMLRPCServer', 'DocXMLRPCServer'),
|
||||||
|
('xmlrpc.server', 'DocCGIXMLRPCRequestHandler'):
|
||||||
|
('DocXMLRPCServer', 'DocCGIXMLRPCRequestHandler'),
|
||||||
|
('http.server', 'SimpleHTTPRequestHandler'):
|
||||||
|
('SimpleHTTPServer', 'SimpleHTTPRequestHandler'),
|
||||||
|
('http.server', 'CGIHTTPRequestHandler'):
|
||||||
|
('CGIHTTPServer', 'CGIHTTPRequestHandler'),
|
||||||
|
('_socket', 'socket'): ('socket', '_socketobject'),
|
||||||
|
})
|
||||||
|
|
||||||
|
PYTHON3_OSERROR_EXCEPTIONS = (
|
||||||
|
'BrokenPipeError',
|
||||||
|
'ChildProcessError',
|
||||||
|
'ConnectionAbortedError',
|
||||||
|
'ConnectionError',
|
||||||
|
'ConnectionRefusedError',
|
||||||
|
'ConnectionResetError',
|
||||||
|
'FileExistsError',
|
||||||
|
'FileNotFoundError',
|
||||||
|
'InterruptedError',
|
||||||
|
'IsADirectoryError',
|
||||||
|
'NotADirectoryError',
|
||||||
|
'PermissionError',
|
||||||
|
'ProcessLookupError',
|
||||||
|
'TimeoutError',
|
||||||
|
)
|
||||||
|
|
||||||
|
for excname in PYTHON3_OSERROR_EXCEPTIONS:
|
||||||
|
REVERSE_NAME_MAPPING[('builtins', excname)] = ('exceptions', 'OSError')
|
||||||
|
|
||||||
|
PYTHON3_IMPORTERROR_EXCEPTIONS = (
|
||||||
|
'ModuleNotFoundError',
|
||||||
|
)
|
||||||
|
|
||||||
|
for excname in PYTHON3_IMPORTERROR_EXCEPTIONS:
|
||||||
|
REVERSE_NAME_MAPPING[('builtins', excname)] = ('exceptions', 'ImportError')
|
152
Lib/_compression.py
Normal file
@@ -0,0 +1,152 @@
"""Internal classes used by the gzip, lzma and bz2 modules"""

import io


BUFFER_SIZE = io.DEFAULT_BUFFER_SIZE  # Compressed data read chunk size


class BaseStream(io.BufferedIOBase):
    """Mode-checking helper functions."""

    def _check_not_closed(self):
        if self.closed:
            raise ValueError("I/O operation on closed file")

    def _check_can_read(self):
        if not self.readable():
            raise io.UnsupportedOperation("File not open for reading")

    def _check_can_write(self):
        if not self.writable():
            raise io.UnsupportedOperation("File not open for writing")

    def _check_can_seek(self):
        if not self.readable():
            raise io.UnsupportedOperation("Seeking is only supported "
                                          "on files open for reading")
        if not self.seekable():
            raise io.UnsupportedOperation("The underlying file object "
                                          "does not support seeking")


class DecompressReader(io.RawIOBase):
    """Adapts the decompressor API to a RawIOBase reader API"""

    def readable(self):
        return True

    def __init__(self, fp, decomp_factory, trailing_error=(), **decomp_args):
        self._fp = fp
        self._eof = False
        self._pos = 0  # Current offset in decompressed stream

        # Set to size of decompressed stream once it is known, for SEEK_END
        self._size = -1

        # Save the decompressor factory and arguments.
        # If the file contains multiple compressed streams, each
        # stream will need a separate decompressor object. A new decompressor
        # object is also needed when implementing a backwards seek().
        self._decomp_factory = decomp_factory
        self._decomp_args = decomp_args
        self._decompressor = self._decomp_factory(**self._decomp_args)

        # Exception class to catch from decompressor signifying invalid
        # trailing data to ignore
        self._trailing_error = trailing_error

    def close(self):
        self._decompressor = None
        return super().close()

    def seekable(self):
        return self._fp.seekable()

    def readinto(self, b):
        with memoryview(b) as view, view.cast("B") as byte_view:
            data = self.read(len(byte_view))
            byte_view[:len(data)] = data
        return len(data)

    def read(self, size=-1):
        if size < 0:
            return self.readall()

        if not size or self._eof:
            return b""
        data = None  # Default if EOF is encountered
        # Depending on the input data, our call to the decompressor may not
        # return any data. In this case, try again after reading another block.
        while True:
            if self._decompressor.eof:
                rawblock = (self._decompressor.unused_data or
                            self._fp.read(BUFFER_SIZE))
                if not rawblock:
                    break
                # Continue to next stream.
                self._decompressor = self._decomp_factory(
                    **self._decomp_args)
                try:
                    data = self._decompressor.decompress(rawblock, size)
                except self._trailing_error:
                    # Trailing data isn't a valid compressed stream; ignore it.
                    break
            else:
                if self._decompressor.needs_input:
                    rawblock = self._fp.read(BUFFER_SIZE)
                    if not rawblock:
                        raise EOFError("Compressed file ended before the "
                                       "end-of-stream marker was reached")
                else:
                    rawblock = b""
                data = self._decompressor.decompress(rawblock, size)
            if data:
                break
        if not data:
            self._eof = True
            self._size = self._pos
            return b""
        self._pos += len(data)
        return data

    # Rewind the file to the beginning of the data stream.
    def _rewind(self):
        self._fp.seek(0)
        self._eof = False
        self._pos = 0
        self._decompressor = self._decomp_factory(**self._decomp_args)

    def seek(self, offset, whence=io.SEEK_SET):
        # Recalculate offset as an absolute file position.
        if whence == io.SEEK_SET:
            pass
        elif whence == io.SEEK_CUR:
            offset = self._pos + offset
        elif whence == io.SEEK_END:
            # Seeking relative to EOF - we need to know the file's size.
            if self._size < 0:
                while self.read(io.DEFAULT_BUFFER_SIZE):
                    pass
            offset = self._size + offset
        else:
            raise ValueError("Invalid value for whence: {}".format(whence))

        # Make it so that offset is the number of bytes to skip forward.
        if offset < self._pos:
            self._rewind()
        else:
            offset -= self._pos

        # Read and discard data until we reach the desired position.
        while offset > 0:
            data = self.read(min(io.DEFAULT_BUFFER_SIZE, offset))
            if not data:
                break
            offset -= len(data)

        return self._pos

    def tell(self):
        """Return the current file position."""
        return self._pos
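
For orientation, a minimal sketch (not from this diff) of how the stdlib's bz2, gzip and lzma file objects wire DecompressReader together with io.BufferedReader: the raw reader adapts the one-shot decompressor, and the buffer provides the usual read semantics. The path 'example.bz2' is a hypothetical input file.

    import bz2
    import io
    from _compression import DecompressReader

    with open('example.bz2', 'rb') as fp:  # hypothetical bz2-compressed file
        # BZ2Decompressor is the decomp_factory; OSError marks invalid
        # trailing data that should end reading rather than raise.
        raw = DecompressReader(fp, bz2.BZ2Decompressor, trailing_error=OSError)
        buffered = io.BufferedReader(raw)
        print(buffered.read(64))  # first 64 decompressed bytes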
163
Lib/_dummy_thread.py
Normal file
@@ -0,0 +1,163 @@
"""Drop-in replacement for the thread module.

Meant to be used as a brain-dead substitute so that threaded code does
not need to be rewritten for when the thread module is not present.

Suggested usage is::

    try:
        import _thread
    except ImportError:
        import _dummy_thread as _thread

"""
# Exports only things specified by thread documentation;
# skipping obsolete synonyms allocate(), start_new(), exit_thread().
__all__ = ['error', 'start_new_thread', 'exit', 'get_ident', 'allocate_lock',
           'interrupt_main', 'LockType']

# A dummy value
TIMEOUT_MAX = 2**31

# NOTE: this module can be imported early in the extension building process,
# and so top level imports of other modules should be avoided.  Instead, all
# imports are done when needed on a function-by-function basis.  Since threads
# are disabled, the import lock should not be an issue anyway (??).

error = RuntimeError

def start_new_thread(function, args, kwargs={}):
    """Dummy implementation of _thread.start_new_thread().

    Compatibility is maintained by making sure that ``args`` is a
    tuple and ``kwargs`` is a dictionary.  If an exception is raised
    and it is SystemExit (which can be done by _thread.exit()) it is
    caught and nothing is done; all other exceptions are printed out
    by using traceback.print_exc().

    If the executed function calls interrupt_main the KeyboardInterrupt will be
    raised when the function returns.

    """
    if type(args) != type(tuple()):
        raise TypeError("2nd arg must be a tuple")
    if type(kwargs) != type(dict()):
        raise TypeError("3rd arg must be a dict")
    global _main
    _main = False
    try:
        function(*args, **kwargs)
    except SystemExit:
        pass
    except:
        import traceback
        traceback.print_exc()
    _main = True
    global _interrupt
    if _interrupt:
        _interrupt = False
        raise KeyboardInterrupt

def exit():
    """Dummy implementation of _thread.exit()."""
    raise SystemExit

def get_ident():
    """Dummy implementation of _thread.get_ident().

    Since this module should only be used when _threadmodule is not
    available, it is safe to assume that the current process is the
    only thread.  Thus a constant can be safely returned.
    """
    return 1

def allocate_lock():
    """Dummy implementation of _thread.allocate_lock()."""
    return LockType()

def stack_size(size=None):
    """Dummy implementation of _thread.stack_size()."""
    if size is not None:
        raise error("setting thread stack size not supported")
    return 0

def _set_sentinel():
    """Dummy implementation of _thread._set_sentinel()."""
    return LockType()

class LockType(object):
    """Class implementing dummy implementation of _thread.LockType.

    Compatibility is maintained by maintaining self.locked_status
    which is a boolean that stores the state of the lock.  Pickling of
    the lock, though, should not be done since if the _thread module is
    then used with an unpickled ``lock()`` from here problems could
    occur from this class not having atomic methods.

    """

    def __init__(self):
        self.locked_status = False

    def acquire(self, waitflag=None, timeout=-1):
        """Dummy implementation of acquire().

        For blocking calls, self.locked_status is automatically set to
        True and returned appropriately based on value of
        ``waitflag``.  If it is non-blocking, then the value is
        actually checked and not set if it is already acquired.  This
        is all done so that threading.Condition's assert statements
        aren't triggered and throw a little fit.

        """
        if waitflag is None or waitflag:
            self.locked_status = True
            return True
        else:
            if not self.locked_status:
                self.locked_status = True
                return True
            else:
                if timeout > 0:
                    import time
                    time.sleep(timeout)
                return False

    __enter__ = acquire

    def __exit__(self, typ, val, tb):
        self.release()

    def release(self):
        """Release the dummy lock."""
        # XXX Perhaps shouldn't actually bother to test?  Could lead
        # to problems for complex, threaded code.
        if not self.locked_status:
            raise error
        self.locked_status = False
        return True

    def locked(self):
        return self.locked_status

    def __repr__(self):
        return "<%s %s.%s object at %s>" % (
            "locked" if self.locked_status else "unlocked",
            self.__class__.__module__,
            self.__class__.__qualname__,
            hex(id(self))
        )

# Used to signal that interrupt_main was called in a "thread"
_interrupt = False
# True when not executing in a "thread"
_main = True

def interrupt_main():
    """Set _interrupt flag to True to have start_new_thread raise
    KeyboardInterrupt upon exiting."""
    if _main:
        raise KeyboardInterrupt
    else:
        global _interrupt
        _interrupt = True
395
Lib/_markupbase.py
Normal file
@@ -0,0 +1,395 @@
"""Shared support for scanning document type declarations in HTML and XHTML.

This module is used as a foundation for the html.parser module.  It has no
documented public API and should not be used directly.

"""

import re

_declname_match = re.compile(r'[a-zA-Z][-_.a-zA-Z0-9]*\s*').match
_declstringlit_match = re.compile(r'(\'[^\']*\'|"[^"]*")\s*').match
_commentclose = re.compile(r'--\s*>')
_markedsectionclose = re.compile(r']\s*]\s*>')

# An analysis of the MS-Word extensions is available at
# http://www.planetpublish.com/xmlarena/xap/Thursday/WordtoXML.pdf

_msmarkedsectionclose = re.compile(r']\s*>')

del re


class ParserBase:
    """Parser base class which provides some common support methods used
    by the SGML/HTML and XHTML parsers."""

    def __init__(self):
        if self.__class__ is ParserBase:
            raise RuntimeError(
                "_markupbase.ParserBase must be subclassed")

    def error(self, message):
        raise NotImplementedError(
            "subclasses of ParserBase must override error()")

    def reset(self):
        self.lineno = 1
        self.offset = 0

    def getpos(self):
        """Return current line number and offset."""
        return self.lineno, self.offset

    # Internal -- update line number and offset.  This should be
    # called for each piece of data exactly once, in order -- in other
    # words the concatenation of all the input strings to this
    # function should be exactly the entire input.
    def updatepos(self, i, j):
        if i >= j:
            return j
        rawdata = self.rawdata
        nlines = rawdata.count("\n", i, j)
        if nlines:
            self.lineno = self.lineno + nlines
            pos = rawdata.rindex("\n", i, j)  # Should not fail
            self.offset = j-(pos+1)
        else:
            self.offset = self.offset + j-i
        return j

    _decl_otherchars = ''

    # Internal -- parse declaration (for use by subclasses).
    def parse_declaration(self, i):
        # This is some sort of declaration; in "HTML as
        # deployed," this should only be the document type
        # declaration ("<!DOCTYPE html...>").
        # ISO 8879:1986, however, has more complex
        # declaration syntax for elements in <!...>, including:
        # --comment--
        # [marked section]
        # name in the following list: ENTITY, DOCTYPE, ELEMENT,
        # ATTLIST, NOTATION, SHORTREF, USEMAP,
        # LINKTYPE, LINK, IDLINK, USELINK, SYSTEM
        rawdata = self.rawdata
        j = i + 2
        assert rawdata[i:j] == "<!", "unexpected call to parse_declaration"
        if rawdata[j:j+1] == ">":
            # the empty comment <!>
            return j + 1
        if rawdata[j:j+1] in ("-", ""):
            # Start of comment followed by buffer boundary,
            # or just a buffer boundary.
            return -1
        # A simple, practical version could look like: ((name|stringlit) S*) + '>'
        n = len(rawdata)
        if rawdata[j:j+2] == '--': #comment
            # Locate --.*-- as the body of the comment
            return self.parse_comment(i)
        elif rawdata[j] == '[': #marked section
            # Locate [statusWord [...arbitrary SGML...]] as the body of the marked section
            # Where statusWord is one of TEMP, CDATA, IGNORE, INCLUDE, RCDATA
            # Note that this is extended by Microsoft Office "Save as Web" function
            # to include [if...] and [endif].
            return self.parse_marked_section(i)
        else: #all other declaration elements
            decltype, j = self._scan_name(j, i)
        if j < 0:
            return j
        if decltype == "doctype":
            self._decl_otherchars = ''
        while j < n:
            c = rawdata[j]
            if c == ">":
                # end of declaration syntax
                data = rawdata[i+2:j]
                if decltype == "doctype":
                    self.handle_decl(data)
                else:
                    # According to the HTML5 specs sections "8.2.4.44 Bogus
                    # comment state" and "8.2.4.45 Markup declaration open
                    # state", a comment token should be emitted.
                    # Calling unknown_decl provides more flexibility though.
                    self.unknown_decl(data)
                return j + 1
            if c in "\"'":
                m = _declstringlit_match(rawdata, j)
                if not m:
                    return -1  # incomplete
                j = m.end()
            elif c in "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ":
                name, j = self._scan_name(j, i)
            elif c in self._decl_otherchars:
                j = j + 1
            elif c == "[":
                # this could be handled in a separate doctype parser
                if decltype == "doctype":
                    j = self._parse_doctype_subset(j + 1, i)
                elif decltype in {"attlist", "linktype", "link", "element"}:
                    # must tolerate []'d groups in a content model in an element declaration
                    # also in data attribute specifications of attlist declaration
                    # also link type declaration subsets in linktype declarations
                    # also link attribute specification lists in link declarations
                    self.error("unsupported '[' char in %s declaration" % decltype)
                else:
                    self.error("unexpected '[' char in declaration")
            else:
                self.error(
                    "unexpected %r char in declaration" % rawdata[j])
            if j < 0:
                return j
        return -1  # incomplete

    # Internal -- parse a marked section
    # Override this to handle MS-word extension syntax <![if word]>content<![endif]>
    def parse_marked_section(self, i, report=1):
        rawdata= self.rawdata
        assert rawdata[i:i+3] == '<![', "unexpected call to parse_marked_section()"
        sectName, j = self._scan_name( i+3, i )
        if j < 0:
            return j
        if sectName in {"temp", "cdata", "ignore", "include", "rcdata"}:
            # look for standard ]]> ending
            match= _markedsectionclose.search(rawdata, i+3)
        elif sectName in {"if", "else", "endif"}:
            # look for MS Office ]> ending
            match= _msmarkedsectionclose.search(rawdata, i+3)
        else:
            self.error('unknown status keyword %r in marked section' % rawdata[i+3:j])
        if not match:
            return -1
        if report:
            j = match.start(0)
            self.unknown_decl(rawdata[i+3: j])
        return match.end(0)

    # Internal -- parse comment, return length or -1 if not terminated
    def parse_comment(self, i, report=1):
        rawdata = self.rawdata
        if rawdata[i:i+4] != '<!--':
            self.error('unexpected call to parse_comment()')
        match = _commentclose.search(rawdata, i+4)
        if not match:
            return -1
        if report:
            j = match.start(0)
            self.handle_comment(rawdata[i+4: j])
        return match.end(0)

    # Internal -- scan past the internal subset in a <!DOCTYPE declaration,
    # returning the index just past any whitespace following the trailing ']'.
    def _parse_doctype_subset(self, i, declstartpos):
        rawdata = self.rawdata
        n = len(rawdata)
        j = i
        while j < n:
            c = rawdata[j]
            if c == "<":
                s = rawdata[j:j+2]
                if s == "<":
                    # end of buffer; incomplete
                    return -1
                if s != "<!":
                    self.updatepos(declstartpos, j + 1)
                    self.error("unexpected char in internal subset (in %r)" % s)
                if (j + 2) == n:
                    # end of buffer; incomplete
                    return -1
                if (j + 4) > n:
                    # end of buffer; incomplete
                    return -1
                if rawdata[j:j+4] == "<!--":
                    j = self.parse_comment(j, report=0)
                    if j < 0:
                        return j
                    continue
                name, j = self._scan_name(j + 2, declstartpos)
                if j == -1:
                    return -1
                if name not in {"attlist", "element", "entity", "notation"}:
                    self.updatepos(declstartpos, j + 2)
                    self.error(
                        "unknown declaration %r in internal subset" % name)
                # handle the individual names
                meth = getattr(self, "_parse_doctype_" + name)
                j = meth(j, declstartpos)
                if j < 0:
                    return j
            elif c == "%":
                # parameter entity reference
                if (j + 1) == n:
                    # end of buffer; incomplete
                    return -1
                s, j = self._scan_name(j + 1, declstartpos)
                if j < 0:
                    return j
                if rawdata[j] == ";":
                    j = j + 1
            elif c == "]":
                j = j + 1
                while j < n and rawdata[j].isspace():
                    j = j + 1
                if j < n:
                    if rawdata[j] == ">":
                        return j
                    self.updatepos(declstartpos, j)
                    self.error("unexpected char after internal subset")
                else:
                    return -1
            elif c.isspace():
                j = j + 1
            else:
                self.updatepos(declstartpos, j)
                self.error("unexpected char %r in internal subset" % c)
        # end of buffer reached
        return -1

    # Internal -- scan past <!ELEMENT declarations
    def _parse_doctype_element(self, i, declstartpos):
        name, j = self._scan_name(i, declstartpos)
        if j == -1:
            return -1
        # style content model; just skip until '>'
        rawdata = self.rawdata
        if '>' in rawdata[j:]:
            return rawdata.find(">", j) + 1
        return -1

    # Internal -- scan past <!ATTLIST declarations
    def _parse_doctype_attlist(self, i, declstartpos):
        rawdata = self.rawdata
        name, j = self._scan_name(i, declstartpos)
        c = rawdata[j:j+1]
        if c == "":
            return -1
        if c == ">":
            return j + 1
        while 1:
            # scan a series of attribute descriptions; simplified:
            #   name type [value] [#constraint]
            name, j = self._scan_name(j, declstartpos)
            if j < 0:
                return j
            c = rawdata[j:j+1]
            if c == "":
                return -1
            if c == "(":
                # an enumerated type; look for ')'
                if ")" in rawdata[j:]:
                    j = rawdata.find(")", j) + 1
                else:
                    return -1
                while rawdata[j:j+1].isspace():
                    j = j + 1
                if not rawdata[j:]:
                    # end of buffer, incomplete
                    return -1
            else:
                name, j = self._scan_name(j, declstartpos)
            c = rawdata[j:j+1]
            if not c:
                return -1
            if c in "'\"":
                m = _declstringlit_match(rawdata, j)
                if m:
                    j = m.end()
                else:
                    return -1
                c = rawdata[j:j+1]
                if not c:
                    return -1
            if c == "#":
                if rawdata[j:] == "#":
                    # end of buffer
                    return -1
                name, j = self._scan_name(j + 1, declstartpos)
                if j < 0:
                    return j
                c = rawdata[j:j+1]
                if not c:
                    return -1
            if c == '>':
                # all done
                return j + 1

    # Internal -- scan past <!NOTATION declarations
    def _parse_doctype_notation(self, i, declstartpos):
        name, j = self._scan_name(i, declstartpos)
        if j < 0:
            return j
        rawdata = self.rawdata
        while 1:
            c = rawdata[j:j+1]
            if not c:
                # end of buffer; incomplete
                return -1
            if c == '>':
                return j + 1
            if c in "'\"":
                m = _declstringlit_match(rawdata, j)
                if not m:
                    return -1
                j = m.end()
            else:
                name, j = self._scan_name(j, declstartpos)
                if j < 0:
                    return j

    # Internal -- scan past <!ENTITY declarations
    def _parse_doctype_entity(self, i, declstartpos):
        rawdata = self.rawdata
        if rawdata[i:i+1] == "%":
            j = i + 1
            while 1:
                c = rawdata[j:j+1]
                if not c:
                    return -1
                if c.isspace():
                    j = j + 1
                else:
                    break
        else:
            j = i
        name, j = self._scan_name(j, declstartpos)
        if j < 0:
            return j
        while 1:
            c = self.rawdata[j:j+1]
            if not c:
                return -1
            if c in "'\"":
                m = _declstringlit_match(rawdata, j)
                if m:
                    j = m.end()
                else:
                    return -1  # incomplete
            elif c == ">":
                return j + 1
            else:
                name, j = self._scan_name(j, declstartpos)
                if j < 0:
                    return j

    # Internal -- scan a name token and the new position and the token, or
    # return -1 if we've reached the end of the buffer.
    def _scan_name(self, i, declstartpos):
        rawdata = self.rawdata
        n = len(rawdata)
        if i == n:
            return None, -1
        m = _declname_match(rawdata, i)
        if m:
            s = m.group()
            name = s.strip()
            if (i + len(s)) == n:
                return None, -1  # end of buffer
            return name.lower(), m.end()
        else:
            self.updatepos(declstartpos, i)
            self.error("expected name token at %r"
                       % rawdata[declstartpos:declstartpos+20])

    # To be overridden -- handlers for unknown objects
    def unknown_decl(self, data):
        pass
502
Lib/_osx_support.py
Normal file
@@ -0,0 +1,502 @@
"""Shared OS X support functions."""

import os
import re
import sys

__all__ = [
    'compiler_fixup',
    'customize_config_vars',
    'customize_compiler',
    'get_platform_osx',
]

# configuration variables that may contain universal build flags,
# like "-arch" or "-isdkroot", that may need customization for
# the user environment
_UNIVERSAL_CONFIG_VARS = ('CFLAGS', 'LDFLAGS', 'CPPFLAGS', 'BASECFLAGS',
                          'BLDSHARED', 'LDSHARED', 'CC', 'CXX',
                          'PY_CFLAGS', 'PY_LDFLAGS', 'PY_CPPFLAGS',
                          'PY_CORE_CFLAGS', 'PY_CORE_LDFLAGS')

# configuration variables that may contain compiler calls
_COMPILER_CONFIG_VARS = ('BLDSHARED', 'LDSHARED', 'CC', 'CXX')

# prefix added to original configuration variable names
_INITPRE = '_OSX_SUPPORT_INITIAL_'


def _find_executable(executable, path=None):
    """Tries to find 'executable' in the directories listed in 'path'.

    A string listing directories separated by 'os.pathsep'; defaults to
    os.environ['PATH'].  Returns the complete filename or None if not found.
    """
    if path is None:
        path = os.environ['PATH']

    paths = path.split(os.pathsep)
    base, ext = os.path.splitext(executable)

    if (sys.platform == 'win32') and (ext != '.exe'):
        executable = executable + '.exe'

    if not os.path.isfile(executable):
        for p in paths:
            f = os.path.join(p, executable)
            if os.path.isfile(f):
                # the file exists, we have a shot at spawn working
                return f
        return None
    else:
        return executable


def _read_output(commandstring):
    """Output from successful command execution or None"""
    # Similar to os.popen(commandstring, "r").read(),
    # but without actually using os.popen because that
    # function is not usable during python bootstrap.
    # tempfile is also not available then.
    import contextlib
    try:
        import tempfile
        fp = tempfile.NamedTemporaryFile()
    except ImportError:
        fp = open("/tmp/_osx_support.%s"%(
            os.getpid(),), "w+b")

    with contextlib.closing(fp) as fp:
        cmd = "%s 2>/dev/null >'%s'" % (commandstring, fp.name)
        return fp.read().decode('utf-8').strip() if not os.system(cmd) else None


def _find_build_tool(toolname):
    """Find a build tool on current path or using xcrun"""
    return (_find_executable(toolname)
                or _read_output("/usr/bin/xcrun -find %s" % (toolname,))
                or ''
            )

_SYSTEM_VERSION = None

def _get_system_version():
    """Return the OS X system version as a string"""
    # Reading this plist is a documented way to get the system
    # version (see the documentation for the Gestalt Manager)
    # We avoid using platform.mac_ver to avoid possible bootstrap issues during
    # the build of Python itself (distutils is used to build standard library
    # extensions).

    global _SYSTEM_VERSION

    if _SYSTEM_VERSION is None:
        _SYSTEM_VERSION = ''
        try:
            f = open('/System/Library/CoreServices/SystemVersion.plist')
        except OSError:
            # We're on a plain darwin box, fall back to the default
            # behaviour.
            pass
        else:
            try:
                m = re.search(r'<key>ProductUserVisibleVersion</key>\s*'
                              r'<string>(.*?)</string>', f.read())
            finally:
                f.close()
            if m is not None:
                _SYSTEM_VERSION = '.'.join(m.group(1).split('.')[:2])
            # else: fall back to the default behaviour

    return _SYSTEM_VERSION

def _remove_original_values(_config_vars):
    """Remove original unmodified values for testing"""
    # This is needed for higher-level cross-platform tests of get_platform.
    for k in list(_config_vars):
        if k.startswith(_INITPRE):
            del _config_vars[k]

def _save_modified_value(_config_vars, cv, newvalue):
    """Save modified and original unmodified value of configuration var"""

    oldvalue = _config_vars.get(cv, '')
    if (oldvalue != newvalue) and (_INITPRE + cv not in _config_vars):
        _config_vars[_INITPRE + cv] = oldvalue
    _config_vars[cv] = newvalue

def _supports_universal_builds():
    """Returns True if universal builds are supported on this system"""
    # As an approximation, we assume that if we are running on 10.4 or above,
    # then we are running with an Xcode environment that supports universal
    # builds, in particular -isysroot and -arch arguments to the compiler. This
    # is in support of allowing 10.4 universal builds to run on 10.3.x systems.

    osx_version = _get_system_version()
    if osx_version:
        try:
            osx_version = tuple(int(i) for i in osx_version.split('.'))
        except ValueError:
            osx_version = ''
    return bool(osx_version >= (10, 4)) if osx_version else False


def _find_appropriate_compiler(_config_vars):
    """Find appropriate C compiler for extension module builds"""

    # Issue #13590:
    #    The OSX location for the compiler varies between OSX
    #    (or rather Xcode) releases. With older releases (up-to 10.5)
    #    the compiler is in /usr/bin, with newer releases the compiler
    #    can only be found inside Xcode.app if the "Command Line Tools"
    #    are not installed.
    #
    #    Furthermore, the compiler that can be used varies between
    #    Xcode releases. Up to Xcode 4 it was possible to use 'gcc-4.2'
    #    as the compiler, after that 'clang' should be used because
    #    gcc-4.2 is either not present, or a copy of 'llvm-gcc' that
    #    miscompiles Python.

    # skip checks if the compiler was overridden with a CC env variable
    if 'CC' in os.environ:
        return _config_vars

    # The CC config var might contain additional arguments.
    # Ignore them while searching.
    cc = oldcc = _config_vars['CC'].split()[0]
    if not _find_executable(cc):
        # Compiler is not found on the shell search PATH.
        # Now search for clang, first on PATH (if the Command LIne
        # Tools have been installed in / or if the user has provided
        # another location via CC).  If not found, try using xcrun
        # to find an uninstalled clang (within a selected Xcode).

        # NOTE: Cannot use subprocess here because of bootstrap
        # issues when building Python itself (and os.popen is
        # implemented on top of subprocess and is therefore not
        # usable as well)

        cc = _find_build_tool('clang')

    elif os.path.basename(cc).startswith('gcc'):
        # Compiler is GCC, check if it is LLVM-GCC
        data = _read_output("'%s' --version"
                             % (cc.replace("'", "'\"'\"'"),))
        if data and 'llvm-gcc' in data:
            # Found LLVM-GCC, fall back to clang
            cc = _find_build_tool('clang')

    if not cc:
        raise SystemError(
               "Cannot locate working compiler")

    if cc != oldcc:
        # Found a replacement compiler.
        # Modify config vars using new compiler, if not already explicitly
        # overridden by an env variable, preserving additional arguments.
        for cv in _COMPILER_CONFIG_VARS:
            if cv in _config_vars and cv not in os.environ:
                cv_split = _config_vars[cv].split()
                cv_split[0] = cc if cv != 'CXX' else cc + '++'
                _save_modified_value(_config_vars, cv, ' '.join(cv_split))

    return _config_vars


def _remove_universal_flags(_config_vars):
    """Remove all universal build arguments from config vars"""

    for cv in _UNIVERSAL_CONFIG_VARS:
        # Do not alter a config var explicitly overridden by env var
        if cv in _config_vars and cv not in os.environ:
            flags = _config_vars[cv]
            flags = re.sub(r'-arch\s+\w+\s', ' ', flags, flags=re.ASCII)
            flags = re.sub('-isysroot [^ \t]*', ' ', flags)
            _save_modified_value(_config_vars, cv, flags)

    return _config_vars


def _remove_unsupported_archs(_config_vars):
    """Remove any unsupported archs from config vars"""
    # Different Xcode releases support different sets for '-arch'
    # flags. In particular, Xcode 4.x no longer supports the
    # PPC architectures.
    #
    # This code automatically removes '-arch ppc' and '-arch ppc64'
    # when these are not supported. That makes it possible to
    # build extensions on OSX 10.7 and later with the prebuilt
    # 32-bit installer on the python.org website.

    # skip checks if the compiler was overridden with a CC env variable
    if 'CC' in os.environ:
        return _config_vars

    if re.search(r'-arch\s+ppc', _config_vars['CFLAGS']) is not None:
        # NOTE: Cannot use subprocess here because of bootstrap
        # issues when building Python itself
        status = os.system(
            """echo 'int main{};' | """
            """'%s' -c -arch ppc -x c -o /dev/null /dev/null 2>/dev/null"""
            %(_config_vars['CC'].replace("'", "'\"'\"'"),))
        if status:
            # The compile failed for some reason.  Because of differences
            # across Xcode and compiler versions, there is no reliable way
            # to be sure why it failed.  Assume here it was due to lack of
            # PPC support and remove the related '-arch' flags from each
            # config variables not explicitly overridden by an environment
            # variable.  If the error was for some other reason, we hope the
            # failure will show up again when trying to compile an extension
            # module.
            for cv in _UNIVERSAL_CONFIG_VARS:
                if cv in _config_vars and cv not in os.environ:
                    flags = _config_vars[cv]
                    flags = re.sub(r'-arch\s+ppc\w*\s', ' ', flags)
                    _save_modified_value(_config_vars, cv, flags)

    return _config_vars


def _override_all_archs(_config_vars):
    """Allow override of all archs with ARCHFLAGS env var"""
    # NOTE: This name was introduced by Apple in OSX 10.5 and
    # is used by several scripting languages distributed with
    # that OS release.
    if 'ARCHFLAGS' in os.environ:
        arch = os.environ['ARCHFLAGS']
        for cv in _UNIVERSAL_CONFIG_VARS:
            if cv in _config_vars and '-arch' in _config_vars[cv]:
                flags = _config_vars[cv]
                flags = re.sub(r'-arch\s+\w+\s', ' ', flags)
                flags = flags + ' ' + arch
                _save_modified_value(_config_vars, cv, flags)

    return _config_vars


def _check_for_unavailable_sdk(_config_vars):
    """Remove references to any SDKs not available"""
    # If we're on OSX 10.5 or later and the user tries to
    # compile an extension using an SDK that is not present
    # on the current machine it is better to not use an SDK
    # than to fail.  This is particularly important with
    # the standalone Command Line Tools alternative to a
    # full-blown Xcode install since the CLT packages do not
    # provide SDKs.  If the SDK is not present, it is assumed
    # that the header files and dev libs have been installed
    # to /usr and /System/Library by either a standalone CLT
    # package or the CLT component within Xcode.
    cflags = _config_vars.get('CFLAGS', '')
    m = re.search(r'-isysroot\s+(\S+)', cflags)
    if m is not None:
        sdk = m.group(1)
        if not os.path.exists(sdk):
            for cv in _UNIVERSAL_CONFIG_VARS:
                # Do not alter a config var explicitly overridden by env var
                if cv in _config_vars and cv not in os.environ:
                    flags = _config_vars[cv]
                    flags = re.sub(r'-isysroot\s+\S+(?:\s|$)', ' ', flags)
                    _save_modified_value(_config_vars, cv, flags)

    return _config_vars


def compiler_fixup(compiler_so, cc_args):
    """
    This function will strip '-isysroot PATH' and '-arch ARCH' from the
    compile flags if the user has specified one them in extra_compile_flags.

    This is needed because '-arch ARCH' adds another architecture to the
    build, without a way to remove an architecture. Furthermore GCC will
    barf if multiple '-isysroot' arguments are present.
    """
    stripArch = stripSysroot = False

    compiler_so = list(compiler_so)

    if not _supports_universal_builds():
        # OSX before 10.4.0, these don't support -arch and -isysroot at
        # all.
        stripArch = stripSysroot = True
    else:
        stripArch = '-arch' in cc_args
        stripSysroot = '-isysroot' in cc_args

    if stripArch or 'ARCHFLAGS' in os.environ:
        while True:
            try:
                index = compiler_so.index('-arch')
                # Strip this argument and the next one:
                del compiler_so[index:index+2]
            except ValueError:
                break

    if 'ARCHFLAGS' in os.environ and not stripArch:
        # User specified different -arch flags in the environ,
        # see also distutils.sysconfig
        compiler_so = compiler_so + os.environ['ARCHFLAGS'].split()

    if stripSysroot:
        while True:
            try:
                index = compiler_so.index('-isysroot')
                # Strip this argument and the next one:
                del compiler_so[index:index+2]
            except ValueError:
                break

    # Check if the SDK that is used during compilation actually exists,
    # the universal build requires the usage of a universal SDK and not all
    # users have that installed by default.
    sysroot = None
    if '-isysroot' in cc_args:
        idx = cc_args.index('-isysroot')
        sysroot = cc_args[idx+1]
    elif '-isysroot' in compiler_so:
        idx = compiler_so.index('-isysroot')
        sysroot = compiler_so[idx+1]

    if sysroot and not os.path.isdir(sysroot):
        from distutils import log
        log.warn("Compiling with an SDK that doesn't seem to exist: %s",
                sysroot)
        log.warn("Please check your Xcode installation")

    return compiler_so


def customize_config_vars(_config_vars):
    """Customize Python build configuration variables.

    Called internally from sysconfig with a mutable mapping
    containing name/value pairs parsed from the configured
    makefile used to build this interpreter.  Returns
    the mapping updated as needed to reflect the environment
    in which the interpreter is running; in the case of
    a Python from a binary installer, the installed
    environment may be very different from the build
    environment, i.e. different OS levels, different
    built tools, different available CPU architectures.

    This customization is performed whenever
    distutils.sysconfig.get_config_vars() is first
    called.  It may be used in environments where no
    compilers are present, i.e. when installing pure
    Python dists.  Customization of compiler paths
    and detection of unavailable archs is deferred
    until the first extension module build is
    requested (in distutils.sysconfig.customize_compiler).

    Currently called from distutils.sysconfig
    """

    if not _supports_universal_builds():
        # On Mac OS X before 10.4, check if -arch and -isysroot
        # are in CFLAGS or LDFLAGS and remove them if they are.
        # This is needed when building extensions on a 10.3 system
        # using a universal build of python.
        _remove_universal_flags(_config_vars)

    # Allow user to override all archs with ARCHFLAGS env var
    _override_all_archs(_config_vars)

    # Remove references to sdks that are not found
    _check_for_unavailable_sdk(_config_vars)

    return _config_vars


def customize_compiler(_config_vars):
    """Customize compiler path and configuration variables.

    This customization is performed when the first
    extension module build is requested
    in distutils.sysconfig.customize_compiler).
    """

    # Find a compiler to use for extension module builds
    _find_appropriate_compiler(_config_vars)

    # Remove ppc arch flags if not supported here
    _remove_unsupported_archs(_config_vars)

    # Allow user to override all archs with ARCHFLAGS env var
    _override_all_archs(_config_vars)

    return _config_vars


def get_platform_osx(_config_vars, osname, release, machine):
    """Filter values for get_platform()"""
    # called from get_platform() in sysconfig and distutils.util
    #
    # For our purposes, we'll assume that the system version from
    # distutils' perspective is what MACOSX_DEPLOYMENT_TARGET is set
    # to. This makes the compatibility story a bit more sane because the
    # machine is going to compile and link as if it were
    # MACOSX_DEPLOYMENT_TARGET.

    macver = _config_vars.get('MACOSX_DEPLOYMENT_TARGET', '')
    macrelease = _get_system_version() or macver
    macver = macver or macrelease

    if macver:
        release = macver
        osname = "macosx"

        # Use the original CFLAGS value, if available, so that we
        # return the same machine type for the platform string.
        # Otherwise, distutils may consider this a cross-compiling
        # case and disallow installs.
        cflags = _config_vars.get(_INITPRE+'CFLAGS',
                                  _config_vars.get('CFLAGS', ''))
        if macrelease:
            try:
                macrelease = tuple(int(i) for i in macrelease.split('.')[0:2])
            except ValueError:
                macrelease = (10, 0)
        else:
            # assume no universal support
            macrelease = (10, 0)

        if (macrelease >= (10, 4)) and '-arch' in cflags.strip():
            # The universal build will build fat binaries, but not on
            # systems before 10.4

            machine = 'fat'

            archs = re.findall(r'-arch\s+(\S+)', cflags)
            archs = tuple(sorted(set(archs)))

            if len(archs) == 1:
                machine = archs[0]
            elif archs == ('i386', 'ppc'):
                machine = 'fat'
            elif archs == ('i386', 'x86_64'):
                machine = 'intel'
            elif archs == ('i386', 'ppc', 'x86_64'):
                machine = 'fat3'
            elif archs == ('ppc64', 'x86_64'):
                machine = 'fat64'
            elif archs == ('i386', 'ppc', 'ppc64', 'x86_64'):
                machine = 'universal'
            else:
                raise ValueError(
                   "Don't know machine value for archs=%r" % (archs,))

        elif machine == 'i386':
            # On OSX the machine type returned by uname is always the
            # 32-bit variant, even if the executable architecture is
            # the 64-bit variant
            if sys.maxsize >= 2**32:
                machine = 'x86_64'

        elif machine in ('PowerPC', 'Power_Macintosh'):
            # Pick a sane name for the PPC architecture.
            # See 'i386' case
            if sys.maxsize >= 2**32:
                machine = 'ppc64'
            else:
                machine = 'ppc'

    return (osname, release, machine)
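
A rough usage sketch of the entry point that sysconfig.get_platform() calls (illustrative, not part of this diff): the osname/release/machine triple is filtered through MACOSX_DEPLOYMENT_TARGET and the recorded CFLAGS, so the printed values depend entirely on the local build.

    import sysconfig
    import _osx_support

    cfg = sysconfig.get_config_vars()
    # osname/release/machine as uname would report them, before filtering.
    print(_osx_support.get_platform_osx(cfg, 'posix', '', 'x86_64'))
    # e.g. ('macosx', '10.9', 'x86_64') on a Mac build; the inputs are
    # returned unchanged when no Mac deployment target is recorded.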
147
Lib/_py_abc.py
Normal file
@@ -0,0 +1,147 @@
from _weakrefset import WeakSet


def get_cache_token():
    """Returns the current ABC cache token.

    The token is an opaque object (supporting equality testing) identifying the
    current version of the ABC cache for virtual subclasses. The token changes
    with every call to ``register()`` on any ABC.
    """
    return ABCMeta._abc_invalidation_counter


class ABCMeta(type):
    """Metaclass for defining Abstract Base Classes (ABCs).

    Use this metaclass to create an ABC. An ABC can be subclassed
    directly, and then acts as a mix-in class. You can also register
    unrelated concrete classes (even built-in classes) and unrelated
    ABCs as 'virtual subclasses' -- these and their descendants will
    be considered subclasses of the registering ABC by the built-in
    issubclass() function, but the registering ABC won't show up in
    their MRO (Method Resolution Order) nor will method
    implementations defined by the registering ABC be callable (not
    even via super()).
    """

    # A global counter that is incremented each time a class is
    # registered as a virtual subclass of anything. It forces the
    # negative cache to be cleared before its next use.
    # Note: this counter is private. Use `abc.get_cache_token()` for
    # external code.
    _abc_invalidation_counter = 0

    def __new__(mcls, name, bases, namespace, **kwargs):
        cls = super().__new__(mcls, name, bases, namespace, **kwargs)
        # Compute set of abstract method names
        abstracts = {name
                     for name, value in namespace.items()
                     if getattr(value, "__isabstractmethod__", False)}
        for base in bases:
            for name in getattr(base, "__abstractmethods__", set()):
                value = getattr(cls, name, None)
                if getattr(value, "__isabstractmethod__", False):
                    abstracts.add(name)
        cls.__abstractmethods__ = frozenset(abstracts)
        # Set up inheritance registry
        cls._abc_registry = WeakSet()
        cls._abc_cache = WeakSet()
        cls._abc_negative_cache = WeakSet()
        cls._abc_negative_cache_version = ABCMeta._abc_invalidation_counter
        return cls

    def register(cls, subclass):
        """Register a virtual subclass of an ABC.

        Returns the subclass, to allow usage as a class decorator.
        """
        if not isinstance(subclass, type):
            raise TypeError("Can only register classes")
        if issubclass(subclass, cls):
            return subclass  # Already a subclass
        # Subtle: test for cycles *after* testing for "already a subclass";
        # this means we allow X.register(X) and interpret it as a no-op.
        if issubclass(cls, subclass):
            # This would create a cycle, which is bad for the algorithm below
            raise RuntimeError("Refusing to create an inheritance cycle")
        cls._abc_registry.add(subclass)
        ABCMeta._abc_invalidation_counter += 1  # Invalidate negative cache
        return subclass

    def _dump_registry(cls, file=None):
        """Debug helper to print the ABC registry."""
        print(f"Class: {cls.__module__}.{cls.__qualname__}", file=file)
        print(f"Inv. counter: {get_cache_token()}", file=file)
        for name in cls.__dict__:
            if name.startswith("_abc_"):
                value = getattr(cls, name)
                if isinstance(value, WeakSet):
                    value = set(value)
                print(f"{name}: {value!r}", file=file)

    def _abc_registry_clear(cls):
        """Clear the registry (for debugging or testing)."""
        cls._abc_registry.clear()

    def _abc_caches_clear(cls):
        """Clear the caches (for debugging or testing)."""
        cls._abc_cache.clear()
        cls._abc_negative_cache.clear()

    def __instancecheck__(cls, instance):
        """Override for isinstance(instance, cls)."""
        # Inline the cache checking
        subclass = instance.__class__
        if subclass in cls._abc_cache:
            return True
        subtype = type(instance)
        if subtype is subclass:
            if (cls._abc_negative_cache_version ==
                ABCMeta._abc_invalidation_counter and
                subclass in cls._abc_negative_cache):
                return False
            # Fall back to the subclass check.
            return cls.__subclasscheck__(subclass)
        return any(cls.__subclasscheck__(c) for c in (subclass, subtype))

    def __subclasscheck__(cls, subclass):
        """Override for issubclass(subclass, cls)."""
        if not isinstance(subclass, type):
            raise TypeError('issubclass() arg 1 must be a class')
        # Check cache
        if subclass in cls._abc_cache:
            return True
        # Check negative cache; may have to invalidate
        if cls._abc_negative_cache_version < ABCMeta._abc_invalidation_counter:
            # Invalidate the negative cache
            cls._abc_negative_cache = WeakSet()
            cls._abc_negative_cache_version = ABCMeta._abc_invalidation_counter
        elif subclass in cls._abc_negative_cache:
            return False
        # Check the subclass hook
        ok = cls.__subclasshook__(subclass)
        if ok is not NotImplemented:
            assert isinstance(ok, bool)
            if ok:
                cls._abc_cache.add(subclass)
            else:
                cls._abc_negative_cache.add(subclass)
            return ok
        # Check if it's a direct subclass
        if cls in getattr(subclass, '__mro__', ()):
            cls._abc_cache.add(subclass)
            return True
        # Check if it's a subclass of a registered class (recursive)
        for rcls in cls._abc_registry:
            if issubclass(subclass, rcls):
                cls._abc_cache.add(subclass)
                return True
        # Check if it's a subclass of a subclass (recursive)
        for scls in cls.__subclasses__():
            if issubclass(subclass, scls):
                cls._abc_cache.add(subclass)
                return True
        # No dice; update negative cache
        cls._abc_negative_cache.add(subclass)
        return False
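For orientation, a minimal sketch of the virtual-subclass machinery above (the class names are invented for illustration, not part of the commit):

    from _py_abc import ABCMeta, get_cache_token

    class Serializer(metaclass=ABCMeta):
        pass

    class JSONLike:                           # unrelated concrete class
        pass

    token = get_cache_token()
    Serializer.register(JSONLike)             # also usable as a class decorator
    assert issubclass(JSONLike, Serializer)   # satisfied via _abc_registry
    assert get_cache_token() != token         # register() bumped the invalidation counter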
6408  Lib/_pydecimal.py  Normal file  (file diff suppressed because it is too large)
2625  Lib/_pyio.py  Normal file  (file diff suppressed because it is too large)
103  Lib/_sitebuiltins.py  Normal file
@@ -0,0 +1,103 @@
"""
The objects used by the site module to add custom builtins.
"""

# Those objects are almost immortal and they keep a reference to their module
# globals. Defining them in the site module would keep too many references
# alive.
# Note this means this module should also avoid keeping things alive in its
# globals.

import sys

class Quitter(object):
    def __init__(self, name, eof):
        self.name = name
        self.eof = eof
    def __repr__(self):
        return 'Use %s() or %s to exit' % (self.name, self.eof)
    def __call__(self, code=None):
        # Shells like IDLE catch the SystemExit, but listen when their
        # stdin wrapper is closed.
        try:
            sys.stdin.close()
        except:
            pass
        raise SystemExit(code)


class _Printer(object):
    """interactive prompt objects for printing the license text, a list of
    contributors and the copyright notice."""

    MAXLINES = 23

    def __init__(self, name, data, files=(), dirs=()):
        import os
        self.__name = name
        self.__data = data
        self.__lines = None
        self.__filenames = [os.path.join(dir, filename)
                            for dir in dirs
                            for filename in files]

    def __setup(self):
        if self.__lines:
            return
        data = None
        for filename in self.__filenames:
            try:
                with open(filename, "r") as fp:
                    data = fp.read()
                break
            except OSError:
                pass
        if not data:
            data = self.__data
        self.__lines = data.split('\n')
        self.__linecnt = len(self.__lines)

    def __repr__(self):
        self.__setup()
        if len(self.__lines) <= self.MAXLINES:
            return "\n".join(self.__lines)
        else:
            return "Type %s() to see the full %s text" % ((self.__name,)*2)

    def __call__(self):
        self.__setup()
        prompt = 'Hit Return for more, or q (and Return) to quit: '
        lineno = 0
        while 1:
            try:
                for i in range(lineno, lineno + self.MAXLINES):
                    print(self.__lines[i])
            except IndexError:
                break
            else:
                lineno += self.MAXLINES
                key = None
                while key is None:
                    key = input(prompt)
                    if key not in ('', 'q'):
                        key = None
                if key == 'q':
                    break


class _Helper(object):
    """Define the builtin 'help'.

    This is a wrapper around pydoc.help that provides a helpful message
    when 'help' is typed at the Python interactive prompt.

    Calling help() at the Python prompt starts an interactive help session.
    Calling help(thing) prints help for the python object 'thing'.
    """

    def __repr__(self):
        return "Type help() for interactive help, " \
               "or help(object) for help about object."
    def __call__(self, *args, **kwds):
        import pydoc
        return pydoc.help(*args, **kwds)
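Roughly how site.py wires these objects into builtins (the license/copyright data strings here are stand-ins):

    import builtins
    import _sitebuiltins

    builtins.quit = _sitebuiltins.Quitter('quit', 'Ctrl-D (i.e. EOF)')
    builtins.copyright = _sitebuiltins._Printer('copyright', 'Copyright (c) ...')
    builtins.help = _sitebuiltins._Helper()

    print(repr(builtins.quit))   # -> Use quit() or Ctrl-D (i.e. EOF) to exit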
588  Lib/_strptime.py  Normal file
@@ -0,0 +1,588 @@
"""Strptime-related classes and functions.

CLASSES:
    LocaleTime -- Discovers and stores locale-specific time information
    TimeRE -- Creates regexes for pattern matching a string of text containing
                time information

FUNCTIONS:
    _getlang -- Figure out what language is being used for the locale
    strptime -- Calculates the time struct represented by the passed-in string

"""
import time
import locale
import calendar
from re import compile as re_compile
from re import IGNORECASE
from re import escape as re_escape
from datetime import (date as datetime_date,
                      timedelta as datetime_timedelta,
                      timezone as datetime_timezone)
from _thread import allocate_lock as _thread_allocate_lock

__all__ = []

def _getlang():
    # Figure out what the current language is set to.
    return locale.getlocale(locale.LC_TIME)

class LocaleTime(object):
    """Stores and handles locale-specific information related to time.

    ATTRIBUTES:
        f_weekday -- full weekday names (7-item list)
        a_weekday -- abbreviated weekday names (7-item list)
        f_month -- full month names (13-item list; dummy value in [0], which
                    is added by code)
        a_month -- abbreviated month names (13-item list, dummy value in
                    [0], which is added by code)
        am_pm -- AM/PM representation (2-item list)
        LC_date_time -- format string for date/time representation (string)
        LC_date -- format string for date representation (string)
        LC_time -- format string for time representation (string)
        timezone -- daylight- and non-daylight-savings timezone representation
                    (2-item list of sets)
        lang -- Language used by instance (2-item tuple)
    """

    def __init__(self):
        """Set all attributes.

        Order of methods called matters for dependency reasons.

        The locale language is set at the outset and then checked again before
        exiting.  This is to make sure that the attributes were not set with a
        mix of information from more than one locale.  This would most likely
        happen when using threads where one thread calls a locale-dependent
        function while another thread changes the locale while the function in
        the other thread is still running.  Proper coding would call for
        locks to prevent changing the locale while locale-dependent code is
        running.  The check here is done in case someone does not think about
        doing this.

        Only other possible issue is if someone changed the timezone and did
        not call tz.tzset .  That is an issue for the programmer, though,
        since changing the timezone is worthless without that call.

        """
        self.lang = _getlang()
        self.__calc_weekday()
        self.__calc_month()
        self.__calc_am_pm()
        self.__calc_timezone()
        self.__calc_date_time()
        if _getlang() != self.lang:
            raise ValueError("locale changed during initialization")
        if time.tzname != self.tzname or time.daylight != self.daylight:
            raise ValueError("timezone changed during initialization")

    def __pad(self, seq, front):
        # Add '' to seq to either the front (is True), else the back.
        seq = list(seq)
        if front:
            seq.insert(0, '')
        else:
            seq.append('')
        return seq

    def __calc_weekday(self):
        # Set self.a_weekday and self.f_weekday using the calendar
        # module.
        a_weekday = [calendar.day_abbr[i].lower() for i in range(7)]
        f_weekday = [calendar.day_name[i].lower() for i in range(7)]
        self.a_weekday = a_weekday
        self.f_weekday = f_weekday

    def __calc_month(self):
        # Set self.f_month and self.a_month using the calendar module.
        a_month = [calendar.month_abbr[i].lower() for i in range(13)]
        f_month = [calendar.month_name[i].lower() for i in range(13)]
        self.a_month = a_month
        self.f_month = f_month

    def __calc_am_pm(self):
        # Set self.am_pm by using time.strftime().

        # The magic date (1999,3,17,hour,44,55,2,76,0) is not really that
        # magical; just happened to have used it everywhere else where a
        # static date was needed.
        am_pm = []
        for hour in (1, 22):
            time_tuple = time.struct_time((1999,3,17,hour,44,55,2,76,0))
            am_pm.append(time.strftime("%p", time_tuple).lower())
        self.am_pm = am_pm

    def __calc_date_time(self):
        # Set self.date_time, self.date, & self.time by using
        # time.strftime().

        # Use (1999,3,17,22,44,55,2,76,0) for magic date because the amount of
        # overloaded numbers is minimized.  The order in which searches for
        # values within the format string is very important; it eliminates
        # possible ambiguity for what something represents.
        time_tuple = time.struct_time((1999,3,17,22,44,55,2,76,0))
        date_time = [None, None, None]
        date_time[0] = time.strftime("%c", time_tuple).lower()
        date_time[1] = time.strftime("%x", time_tuple).lower()
        date_time[2] = time.strftime("%X", time_tuple).lower()
        replacement_pairs = [('%', '%%'), (self.f_weekday[2], '%A'),
                             (self.f_month[3], '%B'), (self.a_weekday[2], '%a'),
                             (self.a_month[3], '%b'), (self.am_pm[1], '%p'),
                             ('1999', '%Y'), ('99', '%y'), ('22', '%H'),
                             ('44', '%M'), ('55', '%S'), ('76', '%j'),
                             ('17', '%d'), ('03', '%m'), ('3', '%m'),
                             # '3' needed for when no leading zero.
                             ('2', '%w'), ('10', '%I')]
        replacement_pairs.extend([(tz, "%Z") for tz_values in self.timezone
                                             for tz in tz_values])
        for offset,directive in ((0,'%c'), (1,'%x'), (2,'%X')):
            current_format = date_time[offset]
            for old, new in replacement_pairs:
                # Must deal with possible lack of locale info
                # manifesting itself as the empty string (e.g., Swedish's
                # lack of AM/PM info) or a platform returning a tuple of empty
                # strings (e.g., MacOS 9 having timezone as ('','')).
                if old:
                    current_format = current_format.replace(old, new)
            # If %W is used, then Sunday, 2005-01-03 will fall on week 0 since
            # 2005-01-03 occurs before the first Monday of the year.  Otherwise
            # %U is used.
            time_tuple = time.struct_time((1999,1,3,1,1,1,6,3,0))
            if '00' in time.strftime(directive, time_tuple):
                U_W = '%W'
            else:
                U_W = '%U'
            date_time[offset] = current_format.replace('11', U_W)
        self.LC_date_time = date_time[0]
        self.LC_date = date_time[1]
        self.LC_time = date_time[2]

    def __calc_timezone(self):
        # Set self.timezone by using time.tzname.
        # Do not worry about possibility of time.tzname[0] == time.tzname[1]
        # and time.daylight; handle that in strptime.
        try:
            time.tzset()
        except AttributeError:
            pass
        self.tzname = time.tzname
        self.daylight = time.daylight
        no_saving = frozenset({"utc", "gmt", self.tzname[0].lower()})
        if self.daylight:
            has_saving = frozenset({self.tzname[1].lower()})
        else:
            has_saving = frozenset()
        self.timezone = (no_saving, has_saving)


class TimeRE(dict):
    """Handle conversion from format directives to regexes."""

    def __init__(self, locale_time=None):
        """Create keys/values.

        Order of execution is important for dependency reasons.

        """
        if locale_time:
            self.locale_time = locale_time
        else:
            self.locale_time = LocaleTime()
        base = super()
        base.__init__({
            # The " \d" part of the regex is to make %c from ANSI C work
            'd': r"(?P<d>3[0-1]|[1-2]\d|0[1-9]|[1-9]| [1-9])",
            'f': r"(?P<f>[0-9]{1,6})",
            'H': r"(?P<H>2[0-3]|[0-1]\d|\d)",
            'I': r"(?P<I>1[0-2]|0[1-9]|[1-9])",
            'G': r"(?P<G>\d\d\d\d)",
            'j': r"(?P<j>36[0-6]|3[0-5]\d|[1-2]\d\d|0[1-9]\d|00[1-9]|[1-9]\d|0[1-9]|[1-9])",
            'm': r"(?P<m>1[0-2]|0[1-9]|[1-9])",
            'M': r"(?P<M>[0-5]\d|\d)",
            'S': r"(?P<S>6[0-1]|[0-5]\d|\d)",
            'U': r"(?P<U>5[0-3]|[0-4]\d|\d)",
            'w': r"(?P<w>[0-6])",
            'u': r"(?P<u>[1-7])",
            'V': r"(?P<V>5[0-3]|0[1-9]|[1-4]\d|\d)",
            # W is set below by using 'U'
            'y': r"(?P<y>\d\d)",
            #XXX: Does 'Y' need to worry about having less or more than
            #     4 digits?
            'Y': r"(?P<Y>\d\d\d\d)",
            'z': r"(?P<z>[+-]\d\d:?[0-5]\d(:?[0-5]\d(\.\d{1,6})?)?|Z)",
            'A': self.__seqToRE(self.locale_time.f_weekday, 'A'),
            'a': self.__seqToRE(self.locale_time.a_weekday, 'a'),
            'B': self.__seqToRE(self.locale_time.f_month[1:], 'B'),
            'b': self.__seqToRE(self.locale_time.a_month[1:], 'b'),
            'p': self.__seqToRE(self.locale_time.am_pm, 'p'),
            'Z': self.__seqToRE((tz for tz_names in self.locale_time.timezone
                                    for tz in tz_names),
                                'Z'),
            '%': '%'})
        base.__setitem__('W', base.__getitem__('U').replace('U', 'W'))
        base.__setitem__('c', self.pattern(self.locale_time.LC_date_time))
        base.__setitem__('x', self.pattern(self.locale_time.LC_date))
        base.__setitem__('X', self.pattern(self.locale_time.LC_time))

    def __seqToRE(self, to_convert, directive):
        """Convert a list to a regex string for matching a directive.

        Want possible matching values to be from longest to shortest.  This
        prevents the possibility of a match occurring for a value that is also
        a substring of a larger value that should have matched (e.g., 'abc'
        matching when 'abcdef' should have been the match).

        """
        to_convert = sorted(to_convert, key=len, reverse=True)
        for value in to_convert:
            if value != '':
                break
        else:
            return ''
        regex = '|'.join(re_escape(stuff) for stuff in to_convert)
        regex = '(?P<%s>%s' % (directive, regex)
        return '%s)' % regex

    def pattern(self, format):
        """Return regex pattern for the format string.

        Need to make sure that any characters that might be interpreted as
        regex syntax are escaped.

        """
        processed_format = ''
        # The sub() call escapes all characters that might be misconstrued
        # as regex syntax.  Cannot use re.escape since we have to deal with
        # format directives (%m, etc.).
        regex_chars = re_compile(r"([\\.^$*+?\(\){}\[\]|])")
        format = regex_chars.sub(r"\\\1", format)
        whitespace_replacement = re_compile(r'\s+')
        format = whitespace_replacement.sub(r'\\s+', format)
        while '%' in format:
            directive_index = format.index('%')+1
            processed_format = "%s%s%s" % (processed_format,
                                           format[:directive_index-1],
                                           self[format[directive_index]])
            format = format[directive_index+1:]
        return "%s%s" % (processed_format, format)

    def compile(self, format):
        """Return a compiled re object for the format string."""
        return re_compile(self.pattern(format), IGNORECASE)
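TimeRE in isolation, for orientation: it maps a strftime-style format to a regex with named groups (locale-dependent pieces come from LocaleTime). Illustrative only, not part of the commit:

    from _strptime import TimeRE

    tre = TimeRE()
    tre.pattern('%Y-%m-%d')   # e.g. (?P<Y>\d\d\d\d)-(?P<m>1[0-2]|0[1-9]|[1-9])-...
    m = tre.compile('%Y-%m-%d').match('1999-03-17')
    assert m.group('Y', 'm', 'd') == ('1999', '03', '17')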
_cache_lock = _thread_allocate_lock()
# DO NOT modify _TimeRE_cache or _regex_cache without acquiring the cache lock
# first!
_TimeRE_cache = TimeRE()
_CACHE_MAX_SIZE = 5 # Max number of regexes stored in _regex_cache
_regex_cache = {}

def _calc_julian_from_U_or_W(year, week_of_year, day_of_week, week_starts_Mon):
    """Calculate the Julian day based on the year, week of the year, and day of
    the week, with week_start_day representing whether the week of the year
    assumes the week starts on Sunday or Monday (6 or 0)."""
    first_weekday = datetime_date(year, 1, 1).weekday()
    # If we are dealing with the %U directive (week starts on Sunday), it's
    # easier to just shift the view to Sunday being the first day of the
    # week.
    if not week_starts_Mon:
        first_weekday = (first_weekday + 1) % 7
        day_of_week = (day_of_week + 1) % 7
    # Need to watch out for a week 0 (when the first day of the year is not
    # the same as that specified by %U or %W).
    week_0_length = (7 - first_weekday) % 7
    if week_of_year == 0:
        return 1 + day_of_week - first_weekday
    else:
        days_to_week = week_0_length + (7 * (week_of_year - 1))
        return 1 + days_to_week + day_of_week


def _calc_julian_from_V(iso_year, iso_week, iso_weekday):
    """Calculate the Julian day based on the ISO 8601 year, week, and weekday.
    ISO weeks start on Mondays, with week 01 being the week containing 4 Jan.
    ISO week days range from 1 (Monday) to 7 (Sunday).
    """
    correction = datetime_date(iso_year, 1, 4).isoweekday() + 3
    ordinal = (iso_week * 7) + iso_weekday - correction
    # ordinal may be negative or 0 now, which means the date is in the previous
    # calendar year
    if ordinal < 1:
        ordinal += datetime_date(iso_year, 1, 1).toordinal()
        iso_year -= 1
        ordinal -= datetime_date(iso_year, 1, 1).toordinal()
    return iso_year, ordinal


def _strptime(data_string, format="%a %b %d %H:%M:%S %Y"):
    """Return a 2-tuple consisting of a time struct and an int containing
    the number of microseconds based on the input string and the
    format string."""

    for index, arg in enumerate([data_string, format]):
        if not isinstance(arg, str):
            msg = "strptime() argument {} must be str, not {}"
            raise TypeError(msg.format(index, type(arg)))

    global _TimeRE_cache, _regex_cache
    with _cache_lock:
        locale_time = _TimeRE_cache.locale_time
        if (_getlang() != locale_time.lang or
            time.tzname != locale_time.tzname or
            time.daylight != locale_time.daylight):
            _TimeRE_cache = TimeRE()
            _regex_cache.clear()
            locale_time = _TimeRE_cache.locale_time
        if len(_regex_cache) > _CACHE_MAX_SIZE:
            _regex_cache.clear()
        format_regex = _regex_cache.get(format)
        if not format_regex:
            try:
                format_regex = _TimeRE_cache.compile(format)
            # KeyError raised when a bad format is found; can be specified as
            # \\, in which case it was a stray % but with a space after it
            except KeyError as err:
                bad_directive = err.args[0]
                if bad_directive == "\\":
                    bad_directive = "%"
                del err
                raise ValueError("'%s' is a bad directive in format '%s'" %
                                 (bad_directive, format)) from None
            # IndexError only occurs when the format string is "%"
            except IndexError:
                raise ValueError("stray %% in format '%s'" % format) from None
            _regex_cache[format] = format_regex
    found = format_regex.match(data_string)
    if not found:
        raise ValueError("time data %r does not match format %r" %
                         (data_string, format))
    if len(data_string) != found.end():
        raise ValueError("unconverted data remains: %s" %
                         data_string[found.end():])

    iso_year = year = None
    month = day = 1
    hour = minute = second = fraction = 0
    tz = -1
    gmtoff = None
    gmtoff_fraction = 0
    # Default to -1 to signify that values not known; not critical to have,
    # though
    iso_week = week_of_year = None
    week_of_year_start = None
    # weekday and julian defaulted to None so as to signal need to calculate
    # values
    weekday = julian = None
    found_dict = found.groupdict()
    for group_key in found_dict.keys():
        # Directives not explicitly handled below:
        #   c, x, X
        #      handled by making out of other directives
        #   U, W
        #      worthless without day of the week
        if group_key == 'y':
            year = int(found_dict['y'])
            # Open Group specification for strptime() states that a %y
            #value in the range of [00, 68] is in the century 2000, while
            #[69,99] is in the century 1900
            if year <= 68:
                year += 2000
            else:
                year += 1900
        elif group_key == 'Y':
            year = int(found_dict['Y'])
        elif group_key == 'G':
            iso_year = int(found_dict['G'])
        elif group_key == 'm':
            month = int(found_dict['m'])
        elif group_key == 'B':
            month = locale_time.f_month.index(found_dict['B'].lower())
        elif group_key == 'b':
            month = locale_time.a_month.index(found_dict['b'].lower())
        elif group_key == 'd':
            day = int(found_dict['d'])
        elif group_key == 'H':
            hour = int(found_dict['H'])
        elif group_key == 'I':
            hour = int(found_dict['I'])
            ampm = found_dict.get('p', '').lower()
            # If there was no AM/PM indicator, we'll treat this like AM
            if ampm in ('', locale_time.am_pm[0]):
                # We're in AM so the hour is correct unless we're
                # looking at 12 midnight.
                # 12 midnight == 12 AM == hour 0
                if hour == 12:
                    hour = 0
            elif ampm == locale_time.am_pm[1]:
                # We're in PM so we need to add 12 to the hour unless
                # we're looking at 12 noon.
                # 12 noon == 12 PM == hour 12
                if hour != 12:
                    hour += 12
        elif group_key == 'M':
            minute = int(found_dict['M'])
        elif group_key == 'S':
            second = int(found_dict['S'])
        elif group_key == 'f':
            s = found_dict['f']
            # Pad to always return microseconds.
            s += "0" * (6 - len(s))
            fraction = int(s)
        elif group_key == 'A':
            weekday = locale_time.f_weekday.index(found_dict['A'].lower())
        elif group_key == 'a':
            weekday = locale_time.a_weekday.index(found_dict['a'].lower())
        elif group_key == 'w':
            weekday = int(found_dict['w'])
            if weekday == 0:
                weekday = 6
            else:
                weekday -= 1
        elif group_key == 'u':
            weekday = int(found_dict['u'])
            weekday -= 1
        elif group_key == 'j':
            julian = int(found_dict['j'])
        elif group_key in ('U', 'W'):
            week_of_year = int(found_dict[group_key])
            if group_key == 'U':
                # U starts week on Sunday.
                week_of_year_start = 6
            else:
                # W starts week on Monday.
                week_of_year_start = 0
        elif group_key == 'V':
            iso_week = int(found_dict['V'])
        elif group_key == 'z':
            z = found_dict['z']
            if z == 'Z':
                gmtoff = 0
            else:
                if z[3] == ':':
                    z = z[:3] + z[4:]
                    if len(z) > 5:
                        if z[5] != ':':
                            msg = f"Inconsistent use of : in {found_dict['z']}"
                            raise ValueError(msg)
                        z = z[:5] + z[6:]
                hours = int(z[1:3])
                minutes = int(z[3:5])
                seconds = int(z[5:7] or 0)
                gmtoff = (hours * 60 * 60) + (minutes * 60) + seconds
                gmtoff_remainder = z[8:]
                # Pad to always return microseconds.
                gmtoff_remainder_padding = "0" * (6 - len(gmtoff_remainder))
                gmtoff_fraction = int(gmtoff_remainder + gmtoff_remainder_padding)
                if z.startswith("-"):
                    gmtoff = -gmtoff
                    gmtoff_fraction = -gmtoff_fraction
        elif group_key == 'Z':
            # Since -1 is default value only need to worry about setting tz if
            # it can be something other than -1.
            found_zone = found_dict['Z'].lower()
            for value, tz_values in enumerate(locale_time.timezone):
                if found_zone in tz_values:
                    # Deal with bad locale setup where timezone names are the
                    # same and yet time.daylight is true; too ambiguous to
                    # be able to tell what timezone has daylight savings
                    if (time.tzname[0] == time.tzname[1] and
                        time.daylight and found_zone not in ("utc", "gmt")):
                        break
                    else:
                        tz = value
                        break
    # Deal with the cases where ambiguities arise
    # don't assume default values for ISO week/year
    if year is None and iso_year is not None:
        if iso_week is None or weekday is None:
            raise ValueError("ISO year directive '%G' must be used with "
                             "the ISO week directive '%V' and a weekday "
                             "directive ('%A', '%a', '%w', or '%u').")
        if julian is not None:
            raise ValueError("Day of the year directive '%j' is not "
                             "compatible with ISO year directive '%G'. "
                             "Use '%Y' instead.")
    elif week_of_year is None and iso_week is not None:
        if weekday is None:
            raise ValueError("ISO week directive '%V' must be used with "
                             "the ISO year directive '%G' and a weekday "
                             "directive ('%A', '%a', '%w', or '%u').")
        else:
            raise ValueError("ISO week directive '%V' is incompatible with "
                             "the year directive '%Y'. Use the ISO year '%G' "
                             "instead.")

    leap_year_fix = False
    if year is None and month == 2 and day == 29:
        year = 1904  # 1904 is first leap year of 20th century
        leap_year_fix = True
    elif year is None:
        year = 1900


    # If we know the week of the year and what day of that week, we can figure
    # out the Julian day of the year.
    if julian is None and weekday is not None:
        if week_of_year is not None:
            week_starts_Mon = True if week_of_year_start == 0 else False
            julian = _calc_julian_from_U_or_W(year, week_of_year, weekday,
                                              week_starts_Mon)
        elif iso_year is not None and iso_week is not None:
            year, julian = _calc_julian_from_V(iso_year, iso_week, weekday + 1)
        if julian is not None and julian <= 0:
            year -= 1
            yday = 366 if calendar.isleap(year) else 365
            julian += yday

    if julian is None:
        # Cannot pre-calculate datetime_date() since can change in Julian
        # calculation and thus could have different value for the day of
        # the week calculation.
        # Need to add 1 to result since first day of the year is 1, not 0.
        julian = datetime_date(year, month, day).toordinal() - \
                 datetime_date(year, 1, 1).toordinal() + 1
    else:  # Assume that if they bothered to include Julian day (or if it was
           # calculated above with year/week/weekday) it will be accurate.
        datetime_result = datetime_date.fromordinal(
                            (julian - 1) +
                            datetime_date(year, 1, 1).toordinal())
        year = datetime_result.year
        month = datetime_result.month
        day = datetime_result.day
    if weekday is None:
        weekday = datetime_date(year, month, day).weekday()
    # Add timezone info
    tzname = found_dict.get("Z")

    if leap_year_fix:
        # the caller didn't supply a year but asked for Feb 29th. We couldn't
        # use the default of 1900 for computations. We set it back to ensure
        # that February 29th is smaller than March 1st.
        year = 1900

    return (year, month, day,
            hour, minute, second,
            weekday, julian, tz, tzname, gmtoff), fraction, gmtoff_fraction

def _strptime_time(data_string, format="%a %b %d %H:%M:%S %Y"):
    """Return a time struct based on the input string and the
    format string."""
    tt = _strptime(data_string, format)[0]
    return time.struct_time(tt[:time._STRUCT_TM_ITEMS])

def _strptime_datetime(cls, data_string, format="%a %b %d %H:%M:%S %Y"):
    """Return a class cls instance based on the input string and the
    format string."""
    tt, fraction, gmtoff_fraction = _strptime(data_string, format)
    tzname, gmtoff = tt[-2:]
    args = tt[:6] + (fraction,)
    if gmtoff is not None:
        tzdelta = datetime_timedelta(seconds=gmtoff, microseconds=gmtoff_fraction)
        if tzname:
            tz = datetime_timezone(tzdelta, tzname)
        else:
            tz = datetime_timezone(tzdelta)
        args += (tz,)

    return cls(*args)
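Exercising the parser above through its public wrapper (time.strptime dispatches to _strptime_time); the %G/%V/%u combination goes through _calc_julian_from_V. Illustrative only:

    import time

    st = time.strptime('1999-03-17 22:44:55', '%Y-%m-%d %H:%M:%S')
    assert (st.tm_year, st.tm_mon, st.tm_mday) == (1999, 3, 17)

    iso = time.strptime('2018-W01-1', '%G-W%V-%u')   # ISO year/week/weekday
    assert (iso.tm_year, iso.tm_mon, iso.tm_mday) == (2018, 1, 1)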
242  Lib/_threading_local.py  Normal file
@@ -0,0 +1,242 @@
"""Thread-local objects.

(Note that this module provides a Python version of the threading.local
 class.  Depending on the version of Python you're using, there may be a
 faster one available.  You should always import the `local` class from
 `threading`.)

Thread-local objects support the management of thread-local data.
If you have data that you want to be local to a thread, simply create
a thread-local object and use its attributes:

  >>> mydata = local()
  >>> mydata.number = 42
  >>> mydata.number
  42

You can also access the local-object's dictionary:

  >>> mydata.__dict__
  {'number': 42}
  >>> mydata.__dict__.setdefault('widgets', [])
  []
  >>> mydata.widgets
  []

What's important about thread-local objects is that their data are
local to a thread. If we access the data in a different thread:

  >>> log = []
  >>> def f():
  ...     items = sorted(mydata.__dict__.items())
  ...     log.append(items)
  ...     mydata.number = 11
  ...     log.append(mydata.number)

  >>> import threading
  >>> thread = threading.Thread(target=f)
  >>> thread.start()
  >>> thread.join()
  >>> log
  [[], 11]

we get different data.  Furthermore, changes made in the other thread
don't affect data seen in this thread:

  >>> mydata.number
  42

Of course, values you get from a local object, including a __dict__
attribute, are for whatever thread was current at the time the
attribute was read.  For that reason, you generally don't want to save
these values across threads, as they apply only to the thread they
came from.

You can create custom local objects by subclassing the local class:

  >>> class MyLocal(local):
  ...     number = 2
  ...     def __init__(self, **kw):
  ...         self.__dict__.update(kw)
  ...     def squared(self):
  ...         return self.number ** 2

This can be useful to support default values, methods and
initialization.  Note that if you define an __init__ method, it will be
called each time the local object is used in a separate thread.  This
is necessary to initialize each thread's dictionary.

Now if we create a local object:

  >>> mydata = MyLocal(color='red')

Now we have a default number:

  >>> mydata.number
  2

an initial color:

  >>> mydata.color
  'red'
  >>> del mydata.color

And a method that operates on the data:

  >>> mydata.squared()
  4

As before, we can access the data in a separate thread:

  >>> log = []
  >>> thread = threading.Thread(target=f)
  >>> thread.start()
  >>> thread.join()
  >>> log
  [[('color', 'red')], 11]

without affecting this thread's data:

  >>> mydata.number
  2
  >>> mydata.color
  Traceback (most recent call last):
  ...
  AttributeError: 'MyLocal' object has no attribute 'color'

Note that subclasses can define slots, but they are not thread
local. They are shared across threads:

  >>> class MyLocal(local):
  ...     __slots__ = 'number'

  >>> mydata = MyLocal()
  >>> mydata.number = 42
  >>> mydata.color = 'red'

So, the separate thread:

  >>> thread = threading.Thread(target=f)
  >>> thread.start()
  >>> thread.join()

affects what we see:

  >>> mydata.number
  11

  >>> del mydata
"""

from weakref import ref
from contextlib import contextmanager

__all__ = ["local"]

# We need to use objects from the threading module, but the threading
# module may also want to use our `local` class, if support for locals
# isn't compiled in to the `thread` module.  This creates potential problems
# with circular imports.  For that reason, we don't import `threading`
# until the bottom of this file (a hack sufficient to worm around the
# potential problems).  Note that all platforms on CPython do have support
# for locals in the `thread` module, and there is no circular import problem
# then, so problems introduced by fiddling the order of imports here won't
# manifest.

class _localimpl:
    """A class managing thread-local dicts"""
    __slots__ = 'key', 'dicts', 'localargs', 'locallock', '__weakref__'

    def __init__(self):
        # The key used in the Thread objects' attribute dicts.
        # We keep it a string for speed but make it unlikely to clash with
        # a "real" attribute.
        self.key = '_threading_local._localimpl.' + str(id(self))
        # { id(Thread) -> (ref(Thread), thread-local dict) }
        self.dicts = {}

    def get_dict(self):
        """Return the dict for the current thread. Raises KeyError if none
        defined."""
        thread = current_thread()
        return self.dicts[id(thread)][1]

    def create_dict(self):
        """Create a new dict for the current thread, and return it."""
        localdict = {}
        key = self.key
        thread = current_thread()
        idt = id(thread)
        def local_deleted(_, key=key):
            # When the localimpl is deleted, remove the thread attribute.
            thread = wrthread()
            if thread is not None:
                del thread.__dict__[key]
        def thread_deleted(_, idt=idt):
            # When the thread is deleted, remove the local dict.
            # Note that this is suboptimal if the thread object gets
            # caught in a reference loop. We would like to be called
            # as soon as the OS-level thread ends instead.
            local = wrlocal()
            if local is not None:
                dct = local.dicts.pop(idt)
        wrlocal = ref(self, local_deleted)
        wrthread = ref(thread, thread_deleted)
        thread.__dict__[key] = wrlocal
        self.dicts[idt] = wrthread, localdict
        return localdict


@contextmanager
def _patch(self):
    impl = object.__getattribute__(self, '_local__impl')
    try:
        dct = impl.get_dict()
    except KeyError:
        dct = impl.create_dict()
        args, kw = impl.localargs
        self.__init__(*args, **kw)
    with impl.locallock:
        object.__setattr__(self, '__dict__', dct)
        yield


class local:
    __slots__ = '_local__impl', '__dict__'

    def __new__(cls, *args, **kw):
        if (args or kw) and (cls.__init__ is object.__init__):
            raise TypeError("Initialization arguments are not supported")
        self = object.__new__(cls)
        impl = _localimpl()
        impl.localargs = (args, kw)
        impl.locallock = RLock()
        object.__setattr__(self, '_local__impl', impl)
        # We need to create the thread dict in anticipation of
        # __init__ being called, to make sure we don't call it
        # again ourselves.
        impl.create_dict()
        return self

    def __getattribute__(self, name):
        with _patch(self):
            return object.__getattribute__(self, name)

    def __setattr__(self, name, value):
        if name == '__dict__':
            raise AttributeError(
                "%r object attribute '__dict__' is read-only"
                % self.__class__.__name__)
        with _patch(self):
            return object.__setattr__(self, name, value)

    def __delattr__(self, name):
        if name == '__dict__':
            raise AttributeError(
                "%r object attribute '__dict__' is read-only"
                % self.__class__.__name__)
        with _patch(self):
            return object.__delattr__(self, name)


from threading import current_thread, RLock
196  Lib/_weakrefset.py  Normal file
@@ -0,0 +1,196 @@
# Access WeakSet through the weakref module.
# This code is separated-out because it is needed
# by abc.py to load everything else at startup.

from _weakref import ref

__all__ = ['WeakSet']


class _IterationGuard:
    # This context manager registers itself in the current iterators of the
    # weak container, such as to delay all removals until the context manager
    # exits.
    # This technique should be relatively thread-safe (since sets are).

    def __init__(self, weakcontainer):
        # Don't create cycles
        self.weakcontainer = ref(weakcontainer)

    def __enter__(self):
        w = self.weakcontainer()
        if w is not None:
            w._iterating.add(self)
        return self

    def __exit__(self, e, t, b):
        w = self.weakcontainer()
        if w is not None:
            s = w._iterating
            s.remove(self)
            if not s:
                w._commit_removals()


class WeakSet:
    def __init__(self, data=None):
        self.data = set()
        def _remove(item, selfref=ref(self)):
            self = selfref()
            if self is not None:
                if self._iterating:
                    self._pending_removals.append(item)
                else:
                    self.data.discard(item)
        self._remove = _remove
        # A list of keys to be removed
        self._pending_removals = []
        self._iterating = set()
        if data is not None:
            self.update(data)

    def _commit_removals(self):
        l = self._pending_removals
        discard = self.data.discard
        while l:
            discard(l.pop())

    def __iter__(self):
        with _IterationGuard(self):
            for itemref in self.data:
                item = itemref()
                if item is not None:
                    # Caveat: the iterator will keep a strong reference to
                    # `item` until it is resumed or closed.
                    yield item

    def __len__(self):
        return len(self.data) - len(self._pending_removals)

    def __contains__(self, item):
        try:
            wr = ref(item)
        except TypeError:
            return False
        return wr in self.data

    def __reduce__(self):
        return (self.__class__, (list(self),),
                getattr(self, '__dict__', None))

    def add(self, item):
        if self._pending_removals:
            self._commit_removals()
        self.data.add(ref(item, self._remove))

    def clear(self):
        if self._pending_removals:
            self._commit_removals()
        self.data.clear()

    def copy(self):
        return self.__class__(self)

    def pop(self):
        if self._pending_removals:
            self._commit_removals()
        while True:
            try:
                itemref = self.data.pop()
            except KeyError:
                raise KeyError('pop from empty WeakSet') from None
            item = itemref()
            if item is not None:
                return item

    def remove(self, item):
        if self._pending_removals:
            self._commit_removals()
        self.data.remove(ref(item))

    def discard(self, item):
        if self._pending_removals:
            self._commit_removals()
        self.data.discard(ref(item))

    def update(self, other):
        if self._pending_removals:
            self._commit_removals()
        for element in other:
            self.add(element)

    def __ior__(self, other):
        self.update(other)
        return self

    def difference(self, other):
        newset = self.copy()
        newset.difference_update(other)
        return newset
    __sub__ = difference

    def difference_update(self, other):
        self.__isub__(other)
    def __isub__(self, other):
        if self._pending_removals:
            self._commit_removals()
        if self is other:
            self.data.clear()
        else:
            self.data.difference_update(ref(item) for item in other)
        return self

    def intersection(self, other):
        return self.__class__(item for item in other if item in self)
    __and__ = intersection

    def intersection_update(self, other):
        self.__iand__(other)
    def __iand__(self, other):
        if self._pending_removals:
            self._commit_removals()
        self.data.intersection_update(ref(item) for item in other)
        return self

    def issubset(self, other):
        return self.data.issubset(ref(item) for item in other)
    __le__ = issubset

    def __lt__(self, other):
        return self.data < set(map(ref, other))

    def issuperset(self, other):
        return self.data.issuperset(ref(item) for item in other)
    __ge__ = issuperset

    def __gt__(self, other):
        return self.data > set(map(ref, other))

    def __eq__(self, other):
        if not isinstance(other, self.__class__):
            return NotImplemented
        return self.data == set(map(ref, other))

    def symmetric_difference(self, other):
        newset = self.copy()
        newset.symmetric_difference_update(other)
        return newset
    __xor__ = symmetric_difference

    def symmetric_difference_update(self, other):
        self.__ixor__(other)
    def __ixor__(self, other):
        if self._pending_removals:
            self._commit_removals()
        if self is other:
            self.data.clear()
        else:
            self.data.symmetric_difference_update(ref(item, self._remove) for item in other)
        return self

    def union(self, other):
        return self.__class__(e for s in (self, other) for e in s)
    __or__ = union

    def isdisjoint(self, other):
        return len(self.intersection(other)) == 0
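A small demonstration of the semantics implemented above: entries disappear once the last strong reference to an element is gone. Illustrative only (the Node class is invented):

    import gc
    from _weakrefset import WeakSet

    class Node:
        pass

    a, b = Node(), Node()
    s = WeakSet([a, b])
    assert len(s) == 2

    del b
    gc.collect()          # only needed on implementations without reference counting
    assert len(s) == 1 and a in s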
170  Lib/abc.py  Normal file
@@ -0,0 +1,170 @@
# Copyright 2007 Google, Inc. All Rights Reserved.
# Licensed to PSF under a Contributor Agreement.

"""Abstract Base Classes (ABCs) according to PEP 3119."""


def abstractmethod(funcobj):
    """A decorator indicating abstract methods.

    Requires that the metaclass is ABCMeta or derived from it.  A
    class that has a metaclass derived from ABCMeta cannot be
    instantiated unless all of its abstract methods are overridden.
    The abstract methods can be called using any of the normal
    'super' call mechanisms.

    Usage:

        class C(metaclass=ABCMeta):
            @abstractmethod
            def my_abstract_method(self, ...):
                ...
    """
    funcobj.__isabstractmethod__ = True
    return funcobj


class abstractclassmethod(classmethod):
    """A decorator indicating abstract classmethods.

    Similar to abstractmethod.

    Usage:

        class C(metaclass=ABCMeta):
            @abstractclassmethod
            def my_abstract_classmethod(cls, ...):
                ...

    'abstractclassmethod' is deprecated. Use 'classmethod' with
    'abstractmethod' instead.
    """

    __isabstractmethod__ = True

    def __init__(self, callable):
        callable.__isabstractmethod__ = True
        super().__init__(callable)


class abstractstaticmethod(staticmethod):
    """A decorator indicating abstract staticmethods.

    Similar to abstractmethod.

    Usage:

        class C(metaclass=ABCMeta):
            @abstractstaticmethod
            def my_abstract_staticmethod(...):
                ...

    'abstractstaticmethod' is deprecated. Use 'staticmethod' with
    'abstractmethod' instead.
    """

    __isabstractmethod__ = True

    def __init__(self, callable):
        callable.__isabstractmethod__ = True
        super().__init__(callable)


class abstractproperty(property):
    """A decorator indicating abstract properties.

    Requires that the metaclass is ABCMeta or derived from it.  A
    class that has a metaclass derived from ABCMeta cannot be
    instantiated unless all of its abstract properties are overridden.
    The abstract properties can be called using any of the normal
    'super' call mechanisms.

    Usage:

        class C(metaclass=ABCMeta):
            @abstractproperty
            def my_abstract_property(self):
                ...

    This defines a read-only property; you can also define a read-write
    abstract property using the 'long' form of property declaration:

        class C(metaclass=ABCMeta):
            def getx(self): ...
            def setx(self, value): ...
            x = abstractproperty(getx, setx)

    'abstractproperty' is deprecated. Use 'property' with 'abstractmethod'
    instead.
    """

    __isabstractmethod__ = True


try:
    from _abc import (get_cache_token, _abc_init, _abc_register,
                      _abc_instancecheck, _abc_subclasscheck, _get_dump,
                      _reset_registry, _reset_caches)
except ImportError:
    from _py_abc import ABCMeta, get_cache_token
    ABCMeta.__module__ = 'abc'
else:
    class ABCMeta(type):
        """Metaclass for defining Abstract Base Classes (ABCs).

        Use this metaclass to create an ABC. An ABC can be subclassed
        directly, and then acts as a mix-in class. You can also register
        unrelated concrete classes (even built-in classes) and unrelated
        ABCs as 'virtual subclasses' -- these and their descendants will
        be considered subclasses of the registering ABC by the built-in
        issubclass() function, but the registering ABC won't show up in
        their MRO (Method Resolution Order) nor will method
        implementations defined by the registering ABC be callable (not
        even via super()).
        """
        def __new__(mcls, name, bases, namespace, **kwargs):
            cls = super().__new__(mcls, name, bases, namespace, **kwargs)
            _abc_init(cls)
            return cls

        def register(cls, subclass):
            """Register a virtual subclass of an ABC.

            Returns the subclass, to allow usage as a class decorator.
            """
            return _abc_register(cls, subclass)

        def __instancecheck__(cls, instance):
            """Override for isinstance(instance, cls)."""
            return _abc_instancecheck(cls, instance)

        def __subclasscheck__(cls, subclass):
            """Override for issubclass(subclass, cls)."""
            return _abc_subclasscheck(cls, subclass)

        def _dump_registry(cls, file=None):
            """Debug helper to print the ABC registry."""
            print(f"Class: {cls.__module__}.{cls.__qualname__}", file=file)
            print(f"Inv. counter: {get_cache_token()}", file=file)
            (_abc_registry, _abc_cache, _abc_negative_cache,
             _abc_negative_cache_version) = _get_dump(cls)
            print(f"_abc_registry: {_abc_registry!r}", file=file)
            print(f"_abc_cache: {_abc_cache!r}", file=file)
            print(f"_abc_negative_cache: {_abc_negative_cache!r}", file=file)
            print(f"_abc_negative_cache_version: {_abc_negative_cache_version!r}",
                  file=file)

        def _abc_registry_clear(cls):
            """Clear the registry (for debugging or testing)."""
            _reset_registry(cls)

        def _abc_caches_clear(cls):
            """Clear the caches (for debugging or testing)."""
            _reset_caches(cls)


class ABC(metaclass=ABCMeta):
    """Helper class that provides a standard way to create an ABC using
    inheritance.
    """
    __slots__ = ()
951
Lib/aifc.py
Normal file
951
Lib/aifc.py
Normal file
|
@ -0,0 +1,951 @@
|
||||||
|
"""Stuff to parse AIFF-C and AIFF files.
|
||||||
|
|
||||||
|
Unless explicitly stated otherwise, the description below is true
|
||||||
|
both for AIFF-C files and AIFF files.
|
||||||
|
|
||||||
|
An AIFF-C file has the following structure.
|
||||||
|
|
||||||
|
+-----------------+
|
||||||
|
| FORM |
|
||||||
|
+-----------------+
|
||||||
|
| <size> |
|
||||||
|
+----+------------+
|
||||||
|
| | AIFC |
|
||||||
|
| +------------+
|
||||||
|
| | <chunks> |
|
||||||
|
| | . |
|
||||||
|
| | . |
|
||||||
|
| | . |
|
||||||
|
+----+------------+
|
||||||
|
|
||||||
|
An AIFF file has the string "AIFF" instead of "AIFC".
|
||||||
|
|
||||||
|
A chunk consists of an identifier (4 bytes) followed by a size (4 bytes,
|
||||||
|
big endian order), followed by the data. The size field does not include
|
||||||
|
the size of the 8 byte header.
|
||||||
|
|
||||||
|
The following chunk types are recognized.
|
||||||
|
|
||||||
|
FVER
|
||||||
|
<version number of AIFF-C defining document> (AIFF-C only).
|
||||||
|
MARK
|
||||||
|
<# of markers> (2 bytes)
|
||||||
|
list of markers:
|
||||||
|
<marker ID> (2 bytes, must be > 0)
|
||||||
|
<position> (4 bytes)
|
||||||
|
<marker name> ("pstring")
|
||||||
|
COMM
|
||||||
|
<# of channels> (2 bytes)
|
||||||
|
<# of sound frames> (4 bytes)
|
||||||
|
<size of the samples> (2 bytes)
|
||||||
|
<sampling frequency> (10 bytes, IEEE 80-bit extended
|
||||||
|
floating point)
|
||||||
|
in AIFF-C files only:
|
||||||
|
<compression type> (4 bytes)
|
||||||
|
<human-readable version of compression type> ("pstring")
|
||||||
|
SSND
|
||||||
|
<offset> (4 bytes, not used by this program)
|
||||||
|
<blocksize> (4 bytes, not used by this program)
|
||||||
|
<sound data>
|
||||||
|
|
||||||
|
A pstring consists of 1 byte length, a string of characters, and 0 or 1
|
||||||
|
byte pad to make the total length even.
|
||||||
|
|
||||||
|
Usage.
|
||||||
|
|
||||||
|
Reading AIFF files:
|
||||||
|
f = aifc.open(file, 'r')
|
||||||
|
where file is either the name of a file or an open file pointer.
|
||||||
|
The open file pointer must have methods read(), seek(), and close().
|
||||||
|
In some types of audio files, if the setpos() method is not used,
|
||||||
|
the seek() method is not necessary.
|
||||||
|
|
||||||
|
This returns an instance of a class with the following public methods:
|
||||||
|
getnchannels() -- returns number of audio channels (1 for
|
||||||
|
mono, 2 for stereo)
|
||||||
|
getsampwidth() -- returns sample width in bytes
|
||||||
|
getframerate() -- returns sampling frequency
|
||||||
|
getnframes() -- returns number of audio frames
|
||||||
|
getcomptype() -- returns compression type ('NONE' for AIFF files)
|
||||||
|
getcompname() -- returns human-readable version of
|
||||||
|
compression type ('not compressed' for AIFF files)
|
||||||
|
getparams() -- returns a namedtuple consisting of all of the
|
||||||
|
above in the above order
|
||||||
|
getmarkers() -- get the list of marks in the audio file or None
|
||||||
|
if there are no marks
|
||||||
|
getmark(id) -- get mark with the specified id (raises an error
|
||||||
|
if the mark does not exist)
|
||||||
|
readframes(n) -- returns at most n frames of audio
|
||||||
|
rewind() -- rewind to the beginning of the audio stream
|
||||||
|
setpos(pos) -- seek to the specified position
|
||||||
|
tell() -- return the current position
|
||||||
|
close() -- close the instance (make it unusable)
|
||||||
|
The position returned by tell(), the position given to setpos() and
|
||||||
|
the position of marks are all compatible and have nothing to do with
|
||||||
|
the actual position in the file.
|
||||||
|
The close() method is called automatically when the class instance
|
||||||
|
is destroyed.
|
||||||
|
|
||||||
|
Writing AIFF files:
|
||||||
|
f = aifc.open(file, 'w')
|
||||||
|
where file is either the name of a file or an open file pointer.
|
||||||
|
The open file pointer must have methods write(), tell(), seek(), and
|
||||||
|
close().
|
||||||
|
|
||||||
|
This returns an instance of a class with the following public methods:
|
||||||
|
aiff() -- create an AIFF file (AIFF-C default)
|
||||||
|
aifc() -- create an AIFF-C file
|
||||||
|
setnchannels(n) -- set the number of channels
|
||||||
|
setsampwidth(n) -- set the sample width
|
||||||
|
setframerate(n) -- set the frame rate
|
||||||
|
setnframes(n) -- set the number of frames
|
||||||
|
setcomptype(type, name)
|
||||||
|
-- set the compression type and the
|
||||||
|
human-readable compression type
|
||||||
|
setparams(tuple)
|
||||||
|
-- set all parameters at once
|
||||||
|
setmark(id, pos, name)
|
||||||
|
-- add specified mark to the list of marks
|
||||||
|
tell() -- return current position in output file (useful
|
||||||
|
in combination with setmark())
|
||||||
|
writeframesraw(data)
|
||||||
|
-- write audio frames without pathing up the
|
||||||
|
file header
|
||||||
|
writeframes(data)
|
||||||
|
-- write audio frames and patch up the file header
|
||||||
|
close() -- patch up the file header and close the
|
||||||
|
output file
|
||||||
|
You should set the parameters before the first writeframesraw or
|
||||||
|
writeframes. The total number of frames does not need to be set,
|
||||||
|
but when it is set to the correct value, the header does not have to
|
||||||
|
be patched up.
|
||||||
|
It is best to first set all parameters, perhaps possibly the
|
||||||
|
compression type, and then write audio frames using writeframesraw.
|
||||||
|
When all frames have been written, either call writeframes(b'') or
|
||||||
|
close() to patch up the sizes in the header.
|
||||||
|
Marks can be added anytime. If there are any marks, you must call
|
||||||
|
close() after all frames have been written.
|
||||||
|
The close() method is called automatically when the class instance
|
||||||
|
is destroyed.
|
||||||
|
|
||||||
|
When a file is opened with the extension '.aiff', an AIFF file is
|
||||||
|
written, otherwise an AIFF-C file is written. This default can be
|
||||||
|
changed by calling aiff() or aifc() before the first writeframes or
|
||||||
|
writeframesraw.
|
||||||
|
"""
|
||||||
|
|
||||||
|
import struct
|
||||||
|
import builtins
|
||||||
|
import warnings
|
||||||
|
|
||||||
|
__all__ = ["Error", "open", "openfp"]
|
||||||
|
|
||||||
|
class Error(Exception):
|
||||||
|
pass
|
||||||
|
|
||||||
|
_AIFC_version = 0xA2805140 # Version 1 of AIFF-C
|
||||||
|
|
||||||
|
def _read_long(file):
|
||||||
|
try:
|
||||||
|
return struct.unpack('>l', file.read(4))[0]
|
||||||
|
except struct.error:
|
||||||
|
raise EOFError from None
|
||||||
|
|
||||||
|
def _read_ulong(file):
|
||||||
|
try:
|
||||||
|
return struct.unpack('>L', file.read(4))[0]
|
||||||
|
except struct.error:
|
||||||
|
raise EOFError from None
|
||||||
|
|
||||||
|
def _read_short(file):
|
||||||
|
try:
|
||||||
|
return struct.unpack('>h', file.read(2))[0]
|
||||||
|
except struct.error:
|
||||||
|
raise EOFError from None
|
||||||
|
|
||||||
|
def _read_ushort(file):
|
||||||
|
try:
|
||||||
|
return struct.unpack('>H', file.read(2))[0]
|
||||||
|
except struct.error:
|
||||||
|
raise EOFError from None
|
||||||
|
|
||||||
|
def _read_string(file):
|
||||||
|
length = ord(file.read(1))
|
||||||
|
if length == 0:
|
||||||
|
data = b''
|
||||||
|
else:
|
||||||
|
data = file.read(length)
|
||||||
|
if length & 1 == 0:
|
||||||
|
dummy = file.read(1)
|
||||||
|
return data
|
||||||
|
|
||||||
|
_HUGE_VAL = 1.79769313486231e+308 # See <limits.h>
|
||||||
|
|
||||||
|
def _read_float(f): # 10 bytes
|
||||||
|
expon = _read_short(f) # 2 bytes
|
||||||
|
sign = 1
|
||||||
|
if expon < 0:
|
||||||
|
sign = -1
|
||||||
|
expon = expon + 0x8000
|
||||||
|
himant = _read_ulong(f) # 4 bytes
|
||||||
|
lomant = _read_ulong(f) # 4 bytes
|
||||||
|
if expon == himant == lomant == 0:
|
||||||
|
f = 0.0
|
||||||
|
elif expon == 0x7FFF:
|
||||||
|
f = _HUGE_VAL
|
||||||
|
else:
|
||||||
|
expon = expon - 16383
|
||||||
|
f = (himant * 0x100000000 + lomant) * pow(2.0, expon - 63)
|
||||||
|
return sign * f
|
||||||
|
|
||||||
|
def _write_short(f, x):
|
||||||
|
f.write(struct.pack('>h', x))
|
||||||
|
|
||||||
|
def _write_ushort(f, x):
|
||||||
|
f.write(struct.pack('>H', x))
|
||||||
|
|
||||||
|
def _write_long(f, x):
|
||||||
|
f.write(struct.pack('>l', x))
|
||||||
|
|
||||||
|
def _write_ulong(f, x):
|
||||||
|
f.write(struct.pack('>L', x))
|
||||||
|
|
||||||
|
def _write_string(f, s):
|
||||||
|
if len(s) > 255:
|
||||||
|
raise ValueError("string exceeds maximum pstring length")
|
||||||
|
f.write(struct.pack('B', len(s)))
|
||||||
|
f.write(s)
|
||||||
|
if len(s) & 1 == 0:
|
||||||
|
f.write(b'\x00')
|
||||||
|
|
||||||
|
def _write_float(f, x):
|
||||||
|
import math
|
||||||
|
if x < 0:
|
||||||
|
sign = 0x8000
|
||||||
|
x = x * -1
|
||||||
|
else:
|
||||||
|
sign = 0
|
||||||
|
if x == 0:
|
||||||
|
expon = 0
|
||||||
|
himant = 0
|
||||||
|
lomant = 0
|
||||||
|
else:
|
||||||
|
fmant, expon = math.frexp(x)
|
||||||
|
if expon > 16384 or fmant >= 1 or fmant != fmant: # Infinity or NaN
|
||||||
|
expon = sign|0x7FFF
|
||||||
|
himant = 0
|
||||||
|
lomant = 0
|
||||||
|
else: # Finite
|
||||||
|
expon = expon + 16382
|
||||||
|
if expon < 0: # denormalized
|
||||||
|
fmant = math.ldexp(fmant, expon)
|
||||||
|
expon = 0
|
||||||
|
expon = expon | sign
|
||||||
|
fmant = math.ldexp(fmant, 32)
|
||||||
|
fsmant = math.floor(fmant)
|
||||||
|
himant = int(fsmant)
|
||||||
|
fmant = math.ldexp(fmant - fsmant, 32)
|
||||||
|
fsmant = math.floor(fmant)
|
||||||
|
lomant = int(fsmant)
|
||||||
|
_write_ushort(f, expon)
|
||||||
|
_write_ulong(f, himant)
|
||||||
|
_write_ulong(f, lomant)
|
||||||
|
|
||||||
|
from chunk import Chunk
|
||||||
|
from collections import namedtuple
|
||||||
|
|
||||||
|
_aifc_params = namedtuple('_aifc_params',
|
||||||
|
'nchannels sampwidth framerate nframes comptype compname')
|
||||||
|
|
||||||
|
_aifc_params.nchannels.__doc__ = 'Number of audio channels (1 for mono, 2 for stereo)'
|
||||||
|
_aifc_params.sampwidth.__doc__ = 'Sample width in bytes'
|
||||||
|
_aifc_params.framerate.__doc__ = 'Sampling frequency'
|
||||||
|
_aifc_params.nframes.__doc__ = 'Number of audio frames'
|
||||||
|
_aifc_params.comptype.__doc__ = 'Compression type ("NONE" for AIFF files)'
|
||||||
|
_aifc_params.compname.__doc__ = ("""\
|
||||||
|
A human-readable version of the compression type
|
||||||
|
('not compressed' for AIFF files)""")
|
||||||
|
|
||||||
|
|
||||||
|
class Aifc_read:
|
||||||
|
# Variables used in this class:
|
||||||
|
#
|
||||||
|
# These variables are available to the user though appropriate
|
||||||
|
# methods of this class:
|
||||||
|
# _file -- the open file with methods read(), close(), and seek()
|
||||||
|
# set through the __init__() method
|
||||||
|
# _nchannels -- the number of audio channels
|
||||||
|
# available through the getnchannels() method
|
||||||
|
# _nframes -- the number of audio frames
|
||||||
|
# available through the getnframes() method
|
||||||
|
# _sampwidth -- the number of bytes per audio sample
|
||||||
|
# available through the getsampwidth() method
|
||||||
|
# _framerate -- the sampling frequency
|
||||||
|
# available through the getframerate() method
|
||||||
|
# _comptype -- the AIFF-C compression type ('NONE' if AIFF)
|
||||||
|
# available through the getcomptype() method
|
||||||
|
# _compname -- the human-readable AIFF-C compression type
|
||||||
|
# available through the getcomptype() method
|
||||||
|
# _markers -- the marks in the audio file
|
||||||
|
# available through the getmarkers() and getmark()
|
||||||
|
# methods
|
||||||
|
# _soundpos -- the position in the audio stream
|
||||||
|
# available through the tell() method, set through the
|
||||||
|
# setpos() method
|
||||||
|
#
|
||||||
|
# These variables are used internally only:
|
||||||
|
# _version -- the AIFF-C version number
|
||||||
|
# _decomp -- the decompressor from builtin module cl
|
||||||
|
# _comm_chunk_read -- 1 iff the COMM chunk has been read
|
||||||
|
# _aifc -- 1 iff reading an AIFF-C file
|
||||||
|
# _ssnd_seek_needed -- 1 iff positioned correctly in audio
|
||||||
|
# file for readframes()
|
||||||
|
# _ssnd_chunk -- instantiation of a chunk class for the SSND chunk
|
||||||
|
# _framesize -- size of one frame in the file
|
||||||
|
|
||||||
|
_file = None # Set here since __del__ checks it
|
||||||
|
|
||||||
|
def initfp(self, file):
|
||||||
|
self._version = 0
|
||||||
|
self._convert = None
|
||||||
|
self._markers = []
|
||||||
|
self._soundpos = 0
|
||||||
|
self._file = file
|
||||||
|
chunk = Chunk(file)
|
||||||
|
if chunk.getname() != b'FORM':
|
||||||
|
raise Error('file does not start with FORM id')
|
||||||
|
formdata = chunk.read(4)
|
||||||
|
if formdata == b'AIFF':
|
||||||
|
self._aifc = 0
|
||||||
|
elif formdata == b'AIFC':
|
||||||
|
self._aifc = 1
|
||||||
|
else:
|
||||||
|
raise Error('not an AIFF or AIFF-C file')
|
||||||
|
self._comm_chunk_read = 0
|
||||||
|
self._ssnd_chunk = None
|
||||||
|
while 1:
|
||||||
|
self._ssnd_seek_needed = 1
|
||||||
|
try:
|
||||||
|
chunk = Chunk(self._file)
|
||||||
|
except EOFError:
|
||||||
|
break
|
||||||
|
chunkname = chunk.getname()
|
||||||
|
if chunkname == b'COMM':
|
||||||
|
self._read_comm_chunk(chunk)
|
||||||
|
self._comm_chunk_read = 1
|
||||||
|
elif chunkname == b'SSND':
|
||||||
|
self._ssnd_chunk = chunk
|
||||||
|
dummy = chunk.read(8)
|
||||||
|
self._ssnd_seek_needed = 0
|
||||||
|
elif chunkname == b'FVER':
|
||||||
|
self._version = _read_ulong(chunk)
|
||||||
|
elif chunkname == b'MARK':
|
||||||
|
self._readmark(chunk)
|
||||||
|
chunk.skip()
|
||||||
|
if not self._comm_chunk_read or not self._ssnd_chunk:
|
||||||
|
raise Error('COMM chunk and/or SSND chunk missing')
|
||||||
|
|
||||||
|
def __init__(self, f):
|
||||||
|
if isinstance(f, str):
|
||||||
|
file_object = builtins.open(f, 'rb')
|
||||||
|
try:
|
||||||
|
self.initfp(file_object)
|
||||||
|
except:
|
||||||
|
file_object.close()
|
||||||
|
raise
|
||||||
|
else:
|
||||||
|
# assume it is an open file object already
|
||||||
|
self.initfp(f)
|
||||||
|
|
||||||
|
def __enter__(self):
|
||||||
|
return self
|
||||||
|
|
||||||
|
def __exit__(self, *args):
|
||||||
|
self.close()
|
||||||
|
|
||||||
|
#
|
||||||
|
# User visible methods.
|
||||||
|
#
|
||||||
|
def getfp(self):
|
||||||
|
return self._file
|
||||||
|
|
||||||
|
def rewind(self):
|
||||||
|
self._ssnd_seek_needed = 1
|
||||||
|
self._soundpos = 0
|
||||||
|
|
||||||
|
def close(self):
|
||||||
|
file = self._file
|
||||||
|
if file is not None:
|
||||||
|
self._file = None
|
||||||
|
file.close()
|
||||||
|
|
||||||
|
def tell(self):
|
||||||
|
return self._soundpos
|
||||||
|
|
||||||
|
def getnchannels(self):
|
||||||
|
return self._nchannels
|
||||||
|
|
||||||
|
def getnframes(self):
|
||||||
|
return self._nframes
|
||||||
|
|
||||||
|
def getsampwidth(self):
|
||||||
|
return self._sampwidth
|
||||||
|
|
||||||
|
def getframerate(self):
|
||||||
|
return self._framerate
|
||||||
|
|
||||||
|
def getcomptype(self):
|
||||||
|
return self._comptype
|
||||||
|
|
||||||
|
def getcompname(self):
|
||||||
|
return self._compname
|
||||||
|
|
||||||
|
## def getversion(self):
|
||||||
|
## return self._version
|
||||||
|
|
||||||
|
def getparams(self):
|
||||||
|
return _aifc_params(self.getnchannels(), self.getsampwidth(),
|
||||||
|
self.getframerate(), self.getnframes(),
|
||||||
|
self.getcomptype(), self.getcompname())
|
||||||
|
|
||||||
|
def getmarkers(self):
|
||||||
|
if len(self._markers) == 0:
|
||||||
|
return None
|
||||||
|
return self._markers
|
||||||
|
|
||||||
|
def getmark(self, id):
|
||||||
|
for marker in self._markers:
|
||||||
|
if id == marker[0]:
|
||||||
|
return marker
|
||||||
|
raise Error('marker {0!r} does not exist'.format(id))
|
||||||
|
|
||||||
|
def setpos(self, pos):
|
||||||
|
if pos < 0 or pos > self._nframes:
|
||||||
|
raise Error('position not in range')
|
||||||
|
self._soundpos = pos
|
||||||
|
self._ssnd_seek_needed = 1
|
||||||
|
|
||||||
|
def readframes(self, nframes):
|
||||||
|
if self._ssnd_seek_needed:
|
||||||
|
self._ssnd_chunk.seek(0)
|
||||||
|
dummy = self._ssnd_chunk.read(8)
|
||||||
|
pos = self._soundpos * self._framesize
|
||||||
|
if pos:
|
||||||
|
self._ssnd_chunk.seek(pos + 8)
|
||||||
|
self._ssnd_seek_needed = 0
|
||||||
|
if nframes == 0:
|
||||||
|
return b''
|
||||||
|
data = self._ssnd_chunk.read(nframes * self._framesize)
|
||||||
|
if self._convert and data:
|
||||||
|
data = self._convert(data)
|
||||||
|
self._soundpos = self._soundpos + len(data) // (self._nchannels
|
||||||
|
* self._sampwidth)
|
||||||
|
return data
|
||||||
|
|
||||||
|
#
|
||||||
|
# Internal methods.
|
||||||
|
#
|
||||||
|
|
||||||
|
def _alaw2lin(self, data):
|
||||||
|
import audioop
|
||||||
|
return audioop.alaw2lin(data, 2)
|
||||||
|
|
||||||
|
def _ulaw2lin(self, data):
|
||||||
|
import audioop
|
||||||
|
return audioop.ulaw2lin(data, 2)
|
||||||
|
|
||||||
|
def _adpcm2lin(self, data):
|
||||||
|
import audioop
|
||||||
|
if not hasattr(self, '_adpcmstate'):
|
||||||
|
# first time
|
||||||
|
self._adpcmstate = None
|
||||||
|
data, self._adpcmstate = audioop.adpcm2lin(data, 2, self._adpcmstate)
|
||||||
|
return data
|
||||||
|
|
||||||
|
def _read_comm_chunk(self, chunk):
|
||||||
|
self._nchannels = _read_short(chunk)
|
||||||
|
self._nframes = _read_long(chunk)
|
||||||
|
self._sampwidth = (_read_short(chunk) + 7) // 8
|
||||||
|
self._framerate = int(_read_float(chunk))
|
||||||
|
if self._sampwidth <= 0:
|
||||||
|
raise Error('bad sample width')
|
||||||
|
if self._nchannels <= 0:
|
||||||
|
raise Error('bad # of channels')
|
||||||
|
self._framesize = self._nchannels * self._sampwidth
|
||||||
|
if self._aifc:
|
||||||
|
#DEBUG: SGI's soundeditor produces a bad size :-(
|
||||||
|
kludge = 0
|
||||||
|
if chunk.chunksize == 18:
|
||||||
|
kludge = 1
|
||||||
|
warnings.warn('Warning: bad COMM chunk size')
|
||||||
|
chunk.chunksize = 23
|
||||||
|
#DEBUG end
|
||||||
|
self._comptype = chunk.read(4)
|
||||||
|
#DEBUG start
|
||||||
|
if kludge:
|
||||||
|
length = ord(chunk.file.read(1))
|
||||||
|
if length & 1 == 0:
|
||||||
|
length = length + 1
|
||||||
|
chunk.chunksize = chunk.chunksize + length
|
||||||
|
chunk.file.seek(-1, 1)
|
||||||
|
#DEBUG end
|
||||||
|
self._compname = _read_string(chunk)
|
||||||
|
if self._comptype != b'NONE':
|
||||||
|
if self._comptype == b'G722':
|
||||||
|
self._convert = self._adpcm2lin
|
||||||
|
elif self._comptype in (b'ulaw', b'ULAW'):
|
||||||
|
self._convert = self._ulaw2lin
|
||||||
|
elif self._comptype in (b'alaw', b'ALAW'):
|
||||||
|
self._convert = self._alaw2lin
|
||||||
|
else:
|
||||||
|
raise Error('unsupported compression type')
|
||||||
|
self._sampwidth = 2
|
||||||
|
else:
|
||||||
|
self._comptype = b'NONE'
|
||||||
|
self._compname = b'not compressed'
|
||||||
|
|
||||||
|
def _readmark(self, chunk):
|
||||||
|
nmarkers = _read_short(chunk)
|
||||||
|
# Some files appear to contain invalid counts.
|
||||||
|
# Cope with this by testing for EOF.
|
||||||
|
try:
|
||||||
|
for i in range(nmarkers):
|
||||||
|
id = _read_short(chunk)
|
||||||
|
pos = _read_long(chunk)
|
||||||
|
name = _read_string(chunk)
|
||||||
|
if pos or name:
|
||||||
|
# some files appear to have
|
||||||
|
# dummy markers consisting of
|
||||||
|
# a position 0 and name ''
|
||||||
|
self._markers.append((id, pos, name))
|
||||||
|
except EOFError:
|
||||||
|
w = ('Warning: MARK chunk contains only %s marker%s instead of %s' %
|
||||||
|
(len(self._markers), '' if len(self._markers) == 1 else 's',
|
||||||
|
nmarkers))
|
||||||
|
warnings.warn(w)
|
||||||
|
|
||||||
|
class Aifc_write:
|
||||||
|
# Variables used in this class:
|
||||||
|
#
|
||||||
|
# These variables are user settable through appropriate methods
|
||||||
|
# of this class:
|
||||||
|
# _file -- the open file with methods write(), close(), tell(), seek()
|
||||||
|
# set through the __init__() method
|
||||||
|
# _comptype -- the AIFF-C compression type ('NONE' in AIFF)
|
||||||
|
# set through the setcomptype() or setparams() method
|
||||||
|
# _compname -- the human-readable AIFF-C compression type
|
||||||
|
# set through the setcomptype() or setparams() method
|
||||||
|
# _nchannels -- the number of audio channels
|
||||||
|
# set through the setnchannels() or setparams() method
|
||||||
|
# _sampwidth -- the number of bytes per audio sample
|
||||||
|
# set through the setsampwidth() or setparams() method
|
||||||
|
# _framerate -- the sampling frequency
|
||||||
|
# set through the setframerate() or setparams() method
|
||||||
|
# _nframes -- the number of audio frames written to the header
|
||||||
|
# set through the setnframes() or setparams() method
|
||||||
|
# _aifc -- whether we're writing an AIFF-C file or an AIFF file
|
||||||
|
# set through the aifc() method, reset through the
|
||||||
|
# aiff() method
|
||||||
|
#
|
||||||
|
# These variables are used internally only:
|
||||||
|
# _version -- the AIFF-C version number
|
||||||
|
# _comp -- the compressor from builtin module cl
|
||||||
|
# _nframeswritten -- the number of audio frames actually written
|
||||||
|
# _datalength -- the size of the audio samples written to the header
|
||||||
|
# _datawritten -- the size of the audio samples actually written
|
||||||
|
|
||||||
|
_file = None # Set here since __del__ checks it
|
||||||
|
|
||||||
|
def __init__(self, f):
|
||||||
|
if isinstance(f, str):
|
||||||
|
file_object = builtins.open(f, 'wb')
|
||||||
|
try:
|
||||||
|
self.initfp(file_object)
|
||||||
|
except:
|
||||||
|
file_object.close()
|
||||||
|
raise
|
||||||
|
|
||||||
|
# treat .aiff file extensions as non-compressed audio
|
||||||
|
if f.endswith('.aiff'):
|
||||||
|
self._aifc = 0
|
||||||
|
else:
|
||||||
|
# assume it is an open file object already
|
||||||
|
self.initfp(f)
|
||||||
|
|
||||||
|
def initfp(self, file):
|
||||||
|
self._file = file
|
||||||
|
self._version = _AIFC_version
|
||||||
|
self._comptype = b'NONE'
|
||||||
|
self._compname = b'not compressed'
|
||||||
|
self._convert = None
|
||||||
|
self._nchannels = 0
|
||||||
|
self._sampwidth = 0
|
||||||
|
self._framerate = 0
|
||||||
|
self._nframes = 0
|
||||||
|
self._nframeswritten = 0
|
||||||
|
self._datawritten = 0
|
||||||
|
self._datalength = 0
|
||||||
|
self._markers = []
|
||||||
|
self._marklength = 0
|
||||||
|
self._aifc = 1 # AIFF-C is default
|
||||||
|
|
||||||
|
def __del__(self):
|
||||||
|
self.close()
|
||||||
|
|
||||||
|
def __enter__(self):
|
||||||
|
return self
|
||||||
|
|
||||||
|
def __exit__(self, *args):
|
||||||
|
self.close()
|
||||||
|
|
||||||
|
#
|
||||||
|
# User visible methods.
|
||||||
|
#
|
||||||
|
def aiff(self):
|
||||||
|
if self._nframeswritten:
|
||||||
|
raise Error('cannot change parameters after starting to write')
|
||||||
|
self._aifc = 0
|
||||||
|
|
||||||
|
def aifc(self):
|
||||||
|
if self._nframeswritten:
|
||||||
|
raise Error('cannot change parameters after starting to write')
|
||||||
|
self._aifc = 1
|
||||||
|
|
||||||
|
def setnchannels(self, nchannels):
|
||||||
|
if self._nframeswritten:
|
||||||
|
raise Error('cannot change parameters after starting to write')
|
||||||
|
if nchannels < 1:
|
||||||
|
raise Error('bad # of channels')
|
||||||
|
self._nchannels = nchannels
|
||||||
|
|
||||||
|
def getnchannels(self):
|
||||||
|
if not self._nchannels:
|
||||||
|
raise Error('number of channels not set')
|
||||||
|
return self._nchannels
|
||||||
|
|
||||||
|
def setsampwidth(self, sampwidth):
|
||||||
|
if self._nframeswritten:
|
||||||
|
raise Error('cannot change parameters after starting to write')
|
||||||
|
if sampwidth < 1 or sampwidth > 4:
|
||||||
|
raise Error('bad sample width')
|
||||||
|
self._sampwidth = sampwidth
|
||||||
|
|
||||||
|
def getsampwidth(self):
|
||||||
|
if not self._sampwidth:
|
||||||
|
raise Error('sample width not set')
|
||||||
|
return self._sampwidth
|
||||||
|
|
||||||
|
def setframerate(self, framerate):
|
||||||
|
if self._nframeswritten:
|
||||||
|
raise Error('cannot change parameters after starting to write')
|
||||||
|
if framerate <= 0:
|
||||||
|
raise Error('bad frame rate')
|
||||||
|
self._framerate = framerate
|
||||||
|
|
||||||
|
def getframerate(self):
|
||||||
|
if not self._framerate:
|
||||||
|
raise Error('frame rate not set')
|
||||||
|
return self._framerate
|
||||||
|
|
||||||
|
def setnframes(self, nframes):
|
||||||
|
if self._nframeswritten:
|
||||||
|
raise Error('cannot change parameters after starting to write')
|
||||||
|
self._nframes = nframes
|
||||||
|
|
||||||
|
def getnframes(self):
|
||||||
|
return self._nframeswritten
|
||||||
|
|
||||||
|
def setcomptype(self, comptype, compname):
|
||||||
|
if self._nframeswritten:
|
||||||
|
raise Error('cannot change parameters after starting to write')
|
||||||
|
if comptype not in (b'NONE', b'ulaw', b'ULAW',
|
||||||
|
b'alaw', b'ALAW', b'G722'):
|
||||||
|
raise Error('unsupported compression type')
|
||||||
|
self._comptype = comptype
|
||||||
|
self._compname = compname
|
||||||
|
|
||||||
|
def getcomptype(self):
|
||||||
|
return self._comptype
|
||||||
|
|
||||||
|
def getcompname(self):
|
||||||
|
return self._compname
|
||||||
|
|
||||||
|
## def setversion(self, version):
|
||||||
|
## if self._nframeswritten:
|
||||||
|
## raise Error, 'cannot change parameters after starting to write'
|
||||||
|
## self._version = version
|
||||||
|
|
||||||
|
def setparams(self, params):
|
||||||
|
nchannels, sampwidth, framerate, nframes, comptype, compname = params
|
||||||
|
if self._nframeswritten:
|
||||||
|
raise Error('cannot change parameters after starting to write')
|
||||||
|
if comptype not in (b'NONE', b'ulaw', b'ULAW',
|
||||||
|
b'alaw', b'ALAW', b'G722'):
|
||||||
|
raise Error('unsupported compression type')
|
||||||
|
self.setnchannels(nchannels)
|
||||||
|
self.setsampwidth(sampwidth)
|
||||||
|
self.setframerate(framerate)
|
||||||
|
self.setnframes(nframes)
|
||||||
|
self.setcomptype(comptype, compname)
|
||||||
|
|
||||||
|
def getparams(self):
|
||||||
|
if not self._nchannels or not self._sampwidth or not self._framerate:
|
||||||
|
raise Error('not all parameters set')
|
||||||
|
return _aifc_params(self._nchannels, self._sampwidth, self._framerate,
|
||||||
|
self._nframes, self._comptype, self._compname)
|
||||||
|
|
||||||
|
def setmark(self, id, pos, name):
|
||||||
|
if id <= 0:
|
||||||
|
raise Error('marker ID must be > 0')
|
||||||
|
if pos < 0:
|
||||||
|
raise Error('marker position must be >= 0')
|
||||||
|
if not isinstance(name, bytes):
|
||||||
|
raise Error('marker name must be bytes')
|
||||||
|
for i in range(len(self._markers)):
|
||||||
|
if id == self._markers[i][0]:
|
||||||
|
self._markers[i] = id, pos, name
|
||||||
|
return
|
||||||
|
self._markers.append((id, pos, name))
|
||||||
|
|
||||||
|
def getmark(self, id):
|
||||||
|
for marker in self._markers:
|
||||||
|
if id == marker[0]:
|
||||||
|
return marker
|
||||||
|
raise Error('marker {0!r} does not exist'.format(id))
|
||||||
|
|
||||||
|
def getmarkers(self):
|
||||||
|
if len(self._markers) == 0:
|
||||||
|
return None
|
||||||
|
return self._markers
|
||||||
|
|
||||||
|
def tell(self):
|
||||||
|
return self._nframeswritten
|
||||||
|
|
||||||
|
def writeframesraw(self, data):
|
||||||
|
if not isinstance(data, (bytes, bytearray)):
|
||||||
|
data = memoryview(data).cast('B')
|
||||||
|
self._ensure_header_written(len(data))
|
||||||
|
nframes = len(data) // (self._sampwidth * self._nchannels)
|
||||||
|
if self._convert:
|
||||||
|
data = self._convert(data)
|
||||||
|
self._file.write(data)
|
||||||
|
self._nframeswritten = self._nframeswritten + nframes
|
||||||
|
self._datawritten = self._datawritten + len(data)
|
||||||
|
|
||||||
|
def writeframes(self, data):
|
||||||
|
self.writeframesraw(data)
|
||||||
|
if self._nframeswritten != self._nframes or \
|
||||||
|
self._datalength != self._datawritten:
|
||||||
|
self._patchheader()
|
||||||
|
|
||||||
|
def close(self):
|
||||||
|
if self._file is None:
|
||||||
|
return
|
||||||
|
try:
|
||||||
|
self._ensure_header_written(0)
|
||||||
|
if self._datawritten & 1:
|
||||||
|
# quick pad to even size
|
||||||
|
self._file.write(b'\x00')
|
||||||
|
self._datawritten = self._datawritten + 1
|
||||||
|
self._writemarkers()
|
||||||
|
if self._nframeswritten != self._nframes or \
|
||||||
|
self._datalength != self._datawritten or \
|
||||||
|
self._marklength:
|
||||||
|
self._patchheader()
|
||||||
|
finally:
|
||||||
|
# Prevent ref cycles
|
||||||
|
self._convert = None
|
||||||
|
f = self._file
|
||||||
|
self._file = None
|
||||||
|
f.close()
|
||||||
|
|
||||||
|
#
|
||||||
|
# Internal methods.
|
||||||
|
#
|
||||||
|
|
||||||
|
def _lin2alaw(self, data):
|
||||||
|
import audioop
|
||||||
|
return audioop.lin2alaw(data, 2)
|
||||||
|
|
||||||
|
def _lin2ulaw(self, data):
|
||||||
|
import audioop
|
||||||
|
return audioop.lin2ulaw(data, 2)
|
||||||
|
|
||||||
|
def _lin2adpcm(self, data):
|
||||||
|
import audioop
|
||||||
|
if not hasattr(self, '_adpcmstate'):
|
||||||
|
self._adpcmstate = None
|
||||||
|
data, self._adpcmstate = audioop.lin2adpcm(data, 2, self._adpcmstate)
|
||||||
|
return data
|
||||||
|
|
||||||
|
def _ensure_header_written(self, datasize):
|
||||||
|
if not self._nframeswritten:
|
||||||
|
if self._comptype in (b'ULAW', b'ulaw', b'ALAW', b'alaw', b'G722'):
|
||||||
|
if not self._sampwidth:
|
||||||
|
self._sampwidth = 2
|
||||||
|
if self._sampwidth != 2:
|
||||||
|
raise Error('sample width must be 2 when compressing '
|
||||||
|
'with ulaw/ULAW, alaw/ALAW or G7.22 (ADPCM)')
|
||||||
|
if not self._nchannels:
|
||||||
|
raise Error('# channels not specified')
|
||||||
|
if not self._sampwidth:
|
||||||
|
raise Error('sample width not specified')
|
||||||
|
if not self._framerate:
|
||||||
|
raise Error('sampling rate not specified')
|
||||||
|
self._write_header(datasize)
|
||||||
|
|
||||||
|
def _init_compression(self):
|
||||||
|
if self._comptype == b'G722':
|
||||||
|
self._convert = self._lin2adpcm
|
||||||
|
elif self._comptype in (b'ulaw', b'ULAW'):
|
||||||
|
self._convert = self._lin2ulaw
|
||||||
|
elif self._comptype in (b'alaw', b'ALAW'):
|
||||||
|
self._convert = self._lin2alaw
|
||||||
|
|
||||||
|
def _write_header(self, initlength):
|
||||||
|
if self._aifc and self._comptype != b'NONE':
|
||||||
|
self._init_compression()
|
||||||
|
self._file.write(b'FORM')
|
||||||
|
if not self._nframes:
|
||||||
|
self._nframes = initlength // (self._nchannels * self._sampwidth)
|
||||||
|
self._datalength = self._nframes * self._nchannels * self._sampwidth
|
||||||
|
if self._datalength & 1:
|
||||||
|
self._datalength = self._datalength + 1
|
||||||
|
if self._aifc:
|
||||||
|
if self._comptype in (b'ulaw', b'ULAW', b'alaw', b'ALAW'):
|
||||||
|
self._datalength = self._datalength // 2
|
||||||
|
if self._datalength & 1:
|
||||||
|
self._datalength = self._datalength + 1
|
||||||
|
elif self._comptype == b'G722':
|
||||||
|
self._datalength = (self._datalength + 3) // 4
|
||||||
|
if self._datalength & 1:
|
||||||
|
self._datalength = self._datalength + 1
|
||||||
|
try:
|
||||||
|
self._form_length_pos = self._file.tell()
|
||||||
|
except (AttributeError, OSError):
|
||||||
|
self._form_length_pos = None
|
||||||
|
commlength = self._write_form_length(self._datalength)
|
||||||
|
if self._aifc:
|
||||||
|
self._file.write(b'AIFC')
|
||||||
|
self._file.write(b'FVER')
|
||||||
|
_write_ulong(self._file, 4)
|
||||||
|
_write_ulong(self._file, self._version)
|
||||||
|
else:
|
||||||
|
self._file.write(b'AIFF')
|
||||||
|
self._file.write(b'COMM')
|
||||||
|
_write_ulong(self._file, commlength)
|
||||||
|
_write_short(self._file, self._nchannels)
|
||||||
|
if self._form_length_pos is not None:
|
||||||
|
self._nframes_pos = self._file.tell()
|
||||||
|
_write_ulong(self._file, self._nframes)
|
||||||
|
if self._comptype in (b'ULAW', b'ulaw', b'ALAW', b'alaw', b'G722'):
|
||||||
|
_write_short(self._file, 8)
|
||||||
|
else:
|
||||||
|
_write_short(self._file, self._sampwidth * 8)
|
||||||
|
_write_float(self._file, self._framerate)
|
||||||
|
if self._aifc:
|
||||||
|
self._file.write(self._comptype)
|
||||||
|
_write_string(self._file, self._compname)
|
||||||
|
self._file.write(b'SSND')
|
||||||
|
if self._form_length_pos is not None:
|
||||||
|
self._ssnd_length_pos = self._file.tell()
|
||||||
|
_write_ulong(self._file, self._datalength + 8)
|
||||||
|
_write_ulong(self._file, 0)
|
||||||
|
_write_ulong(self._file, 0)
|
||||||
|
|
||||||
|
def _write_form_length(self, datalength):
|
||||||
|
if self._aifc:
|
||||||
|
commlength = 18 + 5 + len(self._compname)
|
||||||
|
if commlength & 1:
|
||||||
|
commlength = commlength + 1
|
||||||
|
verslength = 12
|
||||||
|
else:
|
||||||
|
commlength = 18
|
||||||
|
verslength = 0
|
||||||
|
_write_ulong(self._file, 4 + verslength + self._marklength + \
|
||||||
|
8 + commlength + 16 + datalength)
|
||||||
|
return commlength
|
||||||
|
|
||||||
|
def _patchheader(self):
|
||||||
|
curpos = self._file.tell()
|
||||||
|
if self._datawritten & 1:
|
||||||
|
datalength = self._datawritten + 1
|
||||||
|
self._file.write(b'\x00')
|
||||||
|
else:
|
||||||
|
datalength = self._datawritten
|
||||||
|
if datalength == self._datalength and \
|
||||||
|
self._nframes == self._nframeswritten and \
|
||||||
|
self._marklength == 0:
|
||||||
|
self._file.seek(curpos, 0)
|
||||||
|
return
|
||||||
|
self._file.seek(self._form_length_pos, 0)
|
||||||
|
dummy = self._write_form_length(datalength)
|
||||||
|
self._file.seek(self._nframes_pos, 0)
|
||||||
|
_write_ulong(self._file, self._nframeswritten)
|
||||||
|
self._file.seek(self._ssnd_length_pos, 0)
|
||||||
|
_write_ulong(self._file, datalength + 8)
|
||||||
|
self._file.seek(curpos, 0)
|
||||||
|
self._nframes = self._nframeswritten
|
||||||
|
self._datalength = datalength
|
||||||
|
|
||||||
|
def _writemarkers(self):
|
||||||
|
if len(self._markers) == 0:
|
||||||
|
return
|
||||||
|
self._file.write(b'MARK')
|
||||||
|
length = 2
|
||||||
|
for marker in self._markers:
|
||||||
|
id, pos, name = marker
|
||||||
|
length = length + len(name) + 1 + 6
|
||||||
|
if len(name) & 1 == 0:
|
||||||
|
length = length + 1
|
||||||
|
_write_ulong(self._file, length)
|
||||||
|
self._marklength = length + 8
|
||||||
|
_write_short(self._file, len(self._markers))
|
||||||
|
for marker in self._markers:
|
||||||
|
id, pos, name = marker
|
||||||
|
_write_short(self._file, id)
|
||||||
|
_write_ulong(self._file, pos)
|
||||||
|
_write_string(self._file, name)
|
||||||
|
|
||||||
|
def open(f, mode=None):
|
||||||
|
if mode is None:
|
||||||
|
if hasattr(f, 'mode'):
|
||||||
|
mode = f.mode
|
||||||
|
else:
|
||||||
|
mode = 'rb'
|
||||||
|
if mode in ('r', 'rb'):
|
||||||
|
return Aifc_read(f)
|
||||||
|
elif mode in ('w', 'wb'):
|
||||||
|
return Aifc_write(f)
|
||||||
|
else:
|
||||||
|
raise Error("mode must be 'r', 'rb', 'w', or 'wb'")
|
||||||
|
|
||||||
|
def openfp(f, mode=None):
|
||||||
|
warnings.warn("aifc.openfp is deprecated since Python 3.7. "
|
||||||
|
"Use aifc.open instead.", DeprecationWarning, stacklevel=2)
|
||||||
|
return open(f, mode=mode)
|
||||||
|
|
||||||
|
if __name__ == '__main__':
|
||||||
|
import sys
|
||||||
|
if not sys.argv[1:]:
|
||||||
|
sys.argv.append('/usr/demos/data/audio/bach.aiff')
|
||||||
|
fn = sys.argv[1]
|
||||||
|
with open(fn, 'r') as f:
|
||||||
|
print("Reading", fn)
|
||||||
|
print("nchannels =", f.getnchannels())
|
||||||
|
print("nframes =", f.getnframes())
|
||||||
|
print("sampwidth =", f.getsampwidth())
|
||||||
|
print("framerate =", f.getframerate())
|
||||||
|
print("comptype =", f.getcomptype())
|
||||||
|
print("compname =", f.getcompname())
|
||||||
|
if sys.argv[2:]:
|
||||||
|
gn = sys.argv[2]
|
||||||
|
print("Writing", gn)
|
||||||
|
with open(gn, 'w') as g:
|
||||||
|
g.setparams(f.getparams())
|
||||||
|
while 1:
|
||||||
|
data = f.readframes(1024)
|
||||||
|
if not data:
|
||||||
|
break
|
||||||
|
g.writeframes(data)
|
||||||
|
print("Done.")
|
17
Lib/antigravity.py
Normal file
17
Lib/antigravity.py
Normal file
|
@ -0,0 +1,17 @@
|
||||||
|
|
||||||
|
import webbrowser
|
||||||
|
import hashlib
|
||||||
|
|
||||||
|
webbrowser.open("https://xkcd.com/353/")
|
||||||
|
|
||||||
|
def geohash(latitude, longitude, datedow):
|
||||||
|
'''Compute geohash() using the Munroe algorithm.
|
||||||
|
|
||||||
|
>>> geohash(37.421542, -122.085589, b'2005-05-26-10458.68')
|
||||||
|
37.857713 -122.544543
|
||||||
|
|
||||||
|
'''
|
||||||
|
# https://xkcd.com/426/
|
||||||
|
h = hashlib.md5(datedow).hexdigest()
|
||||||
|
p, q = [('%f' % float.fromhex('0.' + x)) for x in (h[:16], h[16:32])]
|
||||||
|
print('%d%s %d%s' % (latitude, p[1:], longitude, q[1:]))
|
2501
Lib/argparse.py
Normal file
2501
Lib/argparse.py
Normal file
File diff suppressed because it is too large
Load diff
331
Lib/ast.py
Normal file
331
Lib/ast.py
Normal file
|
@ -0,0 +1,331 @@
|
||||||
|
"""
|
||||||
|
ast
|
||||||
|
~~~
|
||||||
|
|
||||||
|
The `ast` module helps Python applications to process trees of the Python
|
||||||
|
abstract syntax grammar. The abstract syntax itself might change with
|
||||||
|
each Python release; this module helps to find out programmatically what
|
||||||
|
the current grammar looks like and allows modifications of it.
|
||||||
|
|
||||||
|
An abstract syntax tree can be generated by passing `ast.PyCF_ONLY_AST` as
|
||||||
|
a flag to the `compile()` builtin function or by using the `parse()`
|
||||||
|
function from this module. The result will be a tree of objects whose
|
||||||
|
classes all inherit from `ast.AST`.
|
||||||
|
|
||||||
|
A modified abstract syntax tree can be compiled into a Python code object
|
||||||
|
using the built-in `compile()` function.
|
||||||
|
|
||||||
|
Additionally various helper functions are provided that make working with
|
||||||
|
the trees simpler. The main intention of the helper functions and this
|
||||||
|
module in general is to provide an easy to use interface for libraries
|
||||||
|
that work tightly with the python syntax (template engines for example).
|
||||||
|
|
||||||
|
|
||||||
|
:copyright: Copyright 2008 by Armin Ronacher.
|
||||||
|
:license: Python License.
|
||||||
|
"""
|
||||||
|
from _ast import *
|
||||||
|
|
||||||
|
|
||||||
|
def parse(source, filename='<unknown>', mode='exec'):
|
||||||
|
"""
|
||||||
|
Parse the source into an AST node.
|
||||||
|
Equivalent to compile(source, filename, mode, PyCF_ONLY_AST).
|
||||||
|
"""
|
||||||
|
return compile(source, filename, mode, PyCF_ONLY_AST)
|
||||||
|
|
||||||
|
|
||||||
|
def literal_eval(node_or_string):
|
||||||
|
"""
|
||||||
|
Safely evaluate an expression node or a string containing a Python
|
||||||
|
expression. The string or node provided may only consist of the following
|
||||||
|
Python literal structures: strings, bytes, numbers, tuples, lists, dicts,
|
||||||
|
sets, booleans, and None.
|
||||||
|
"""
|
||||||
|
if isinstance(node_or_string, str):
|
||||||
|
node_or_string = parse(node_or_string, mode='eval')
|
||||||
|
if isinstance(node_or_string, Expression):
|
||||||
|
node_or_string = node_or_string.body
|
||||||
|
def _convert_num(node):
|
||||||
|
if isinstance(node, Constant):
|
||||||
|
if isinstance(node.value, (int, float, complex)):
|
||||||
|
return node.value
|
||||||
|
elif isinstance(node, Num):
|
||||||
|
return node.n
|
||||||
|
raise ValueError('malformed node or string: ' + repr(node))
|
||||||
|
def _convert_signed_num(node):
|
||||||
|
if isinstance(node, UnaryOp) and isinstance(node.op, (UAdd, USub)):
|
||||||
|
operand = _convert_num(node.operand)
|
||||||
|
if isinstance(node.op, UAdd):
|
||||||
|
return + operand
|
||||||
|
else:
|
||||||
|
return - operand
|
||||||
|
return _convert_num(node)
|
||||||
|
def _convert(node):
|
||||||
|
if isinstance(node, Constant):
|
||||||
|
return node.value
|
||||||
|
elif isinstance(node, (Str, Bytes)):
|
||||||
|
return node.s
|
||||||
|
elif isinstance(node, Num):
|
||||||
|
return node.n
|
||||||
|
elif isinstance(node, Tuple):
|
||||||
|
return tuple(map(_convert, node.elts))
|
||||||
|
elif isinstance(node, List):
|
||||||
|
return list(map(_convert, node.elts))
|
||||||
|
elif isinstance(node, Set):
|
||||||
|
return set(map(_convert, node.elts))
|
||||||
|
elif isinstance(node, Dict):
|
||||||
|
return dict(zip(map(_convert, node.keys),
|
||||||
|
map(_convert, node.values)))
|
||||||
|
elif isinstance(node, NameConstant):
|
||||||
|
return node.value
|
||||||
|
elif isinstance(node, BinOp) and isinstance(node.op, (Add, Sub)):
|
||||||
|
left = _convert_signed_num(node.left)
|
||||||
|
right = _convert_num(node.right)
|
||||||
|
if isinstance(left, (int, float)) and isinstance(right, complex):
|
||||||
|
if isinstance(node.op, Add):
|
||||||
|
return left + right
|
||||||
|
else:
|
||||||
|
return left - right
|
||||||
|
return _convert_signed_num(node)
|
||||||
|
return _convert(node_or_string)
|
||||||
|
|
||||||
|
|
||||||
|
def dump(node, annotate_fields=True, include_attributes=False):
|
||||||
|
"""
|
||||||
|
Return a formatted dump of the tree in *node*. This is mainly useful for
|
||||||
|
debugging purposes. The returned string will show the names and the values
|
||||||
|
for fields. This makes the code impossible to evaluate, so if evaluation is
|
||||||
|
wanted *annotate_fields* must be set to False. Attributes such as line
|
||||||
|
numbers and column offsets are not dumped by default. If this is wanted,
|
||||||
|
*include_attributes* can be set to True.
|
||||||
|
"""
|
||||||
|
def _format(node):
|
||||||
|
if isinstance(node, AST):
|
||||||
|
fields = [(a, _format(b)) for a, b in iter_fields(node)]
|
||||||
|
rv = '%s(%s' % (node.__class__.__name__, ', '.join(
|
||||||
|
('%s=%s' % field for field in fields)
|
||||||
|
if annotate_fields else
|
||||||
|
(b for a, b in fields)
|
||||||
|
))
|
||||||
|
if include_attributes and node._attributes:
|
||||||
|
rv += fields and ', ' or ' '
|
||||||
|
rv += ', '.join('%s=%s' % (a, _format(getattr(node, a)))
|
||||||
|
for a in node._attributes)
|
||||||
|
return rv + ')'
|
||||||
|
elif isinstance(node, list):
|
||||||
|
return '[%s]' % ', '.join(_format(x) for x in node)
|
||||||
|
return repr(node)
|
||||||
|
if not isinstance(node, AST):
|
||||||
|
raise TypeError('expected AST, got %r' % node.__class__.__name__)
|
||||||
|
return _format(node)
|
||||||
|
|
||||||
|
|
||||||
|
def copy_location(new_node, old_node):
|
||||||
|
"""
|
||||||
|
Copy source location (`lineno` and `col_offset` attributes) from
|
||||||
|
*old_node* to *new_node* if possible, and return *new_node*.
|
||||||
|
"""
|
||||||
|
for attr in 'lineno', 'col_offset':
|
||||||
|
if attr in old_node._attributes and attr in new_node._attributes \
|
||||||
|
and hasattr(old_node, attr):
|
||||||
|
setattr(new_node, attr, getattr(old_node, attr))
|
||||||
|
return new_node
|
||||||
|
|
||||||
|
|
||||||
|
def fix_missing_locations(node):
|
||||||
|
"""
|
||||||
|
When you compile a node tree with compile(), the compiler expects lineno and
|
||||||
|
col_offset attributes for every node that supports them. This is rather
|
||||||
|
tedious to fill in for generated nodes, so this helper adds these attributes
|
||||||
|
recursively where not already set, by setting them to the values of the
|
||||||
|
parent node. It works recursively starting at *node*.
|
||||||
|
"""
|
||||||
|
def _fix(node, lineno, col_offset):
|
||||||
|
if 'lineno' in node._attributes:
|
||||||
|
if not hasattr(node, 'lineno'):
|
||||||
|
node.lineno = lineno
|
||||||
|
else:
|
||||||
|
lineno = node.lineno
|
||||||
|
if 'col_offset' in node._attributes:
|
||||||
|
if not hasattr(node, 'col_offset'):
|
||||||
|
node.col_offset = col_offset
|
||||||
|
else:
|
||||||
|
col_offset = node.col_offset
|
||||||
|
for child in iter_child_nodes(node):
|
||||||
|
_fix(child, lineno, col_offset)
|
||||||
|
_fix(node, 1, 0)
|
||||||
|
return node
|
||||||
|
|
||||||
|
|
||||||
|
def increment_lineno(node, n=1):
|
||||||
|
"""
|
||||||
|
Increment the line number of each node in the tree starting at *node* by *n*.
|
||||||
|
This is useful to "move code" to a different location in a file.
|
||||||
|
"""
|
||||||
|
for child in walk(node):
|
||||||
|
if 'lineno' in child._attributes:
|
||||||
|
child.lineno = getattr(child, 'lineno', 0) + n
|
||||||
|
return node
|
||||||
|
|
||||||
|
|
||||||
|
def iter_fields(node):
|
||||||
|
"""
|
||||||
|
Yield a tuple of ``(fieldname, value)`` for each field in ``node._fields``
|
||||||
|
that is present on *node*.
|
||||||
|
"""
|
||||||
|
for field in node._fields:
|
||||||
|
try:
|
||||||
|
yield field, getattr(node, field)
|
||||||
|
except AttributeError:
|
||||||
|
pass
|
||||||
|
|
||||||
|
|
||||||
|
def iter_child_nodes(node):
|
||||||
|
"""
|
||||||
|
Yield all direct child nodes of *node*, that is, all fields that are nodes
|
||||||
|
and all items of fields that are lists of nodes.
|
||||||
|
"""
|
||||||
|
for name, field in iter_fields(node):
|
||||||
|
if isinstance(field, AST):
|
||||||
|
yield field
|
||||||
|
elif isinstance(field, list):
|
||||||
|
for item in field:
|
||||||
|
if isinstance(item, AST):
|
||||||
|
yield item
|
||||||
|
|
||||||
|
|
||||||
|
def get_docstring(node, clean=True):
|
||||||
|
"""
|
||||||
|
Return the docstring for the given node or None if no docstring can
|
||||||
|
be found. If the node provided does not have docstrings a TypeError
|
||||||
|
will be raised.
|
||||||
|
|
||||||
|
If *clean* is `True`, all tabs are expanded to spaces and any whitespace
|
||||||
|
that can be uniformly removed from the second line onwards is removed.
|
||||||
|
"""
|
||||||
|
if not isinstance(node, (AsyncFunctionDef, FunctionDef, ClassDef, Module)):
|
||||||
|
raise TypeError("%r can't have docstrings" % node.__class__.__name__)
|
||||||
|
if not(node.body and isinstance(node.body[0], Expr)):
|
||||||
|
return None
|
||||||
|
node = node.body[0].value
|
||||||
|
if isinstance(node, Str):
|
||||||
|
text = node.s
|
||||||
|
elif isinstance(node, Constant) and isinstance(node.value, str):
|
||||||
|
text = node.value
|
||||||
|
else:
|
||||||
|
return None
|
||||||
|
if clean:
|
||||||
|
import inspect
|
||||||
|
text = inspect.cleandoc(text)
|
||||||
|
return text
|
||||||
|
|
||||||
|
|
||||||
|
def walk(node):
|
||||||
|
"""
|
||||||
|
Recursively yield all descendant nodes in the tree starting at *node*
|
||||||
|
(including *node* itself), in no specified order. This is useful if you
|
||||||
|
only want to modify nodes in place and don't care about the context.
|
||||||
|
"""
|
||||||
|
from collections import deque
|
||||||
|
todo = deque([node])
|
||||||
|
while todo:
|
||||||
|
node = todo.popleft()
|
||||||
|
todo.extend(iter_child_nodes(node))
|
||||||
|
yield node
|
||||||
|
|
||||||
|
|
||||||
|
class NodeVisitor(object):
|
||||||
|
"""
|
||||||
|
A node visitor base class that walks the abstract syntax tree and calls a
|
||||||
|
visitor function for every node found. This function may return a value
|
||||||
|
which is forwarded by the `visit` method.
|
||||||
|
|
||||||
|
This class is meant to be subclassed, with the subclass adding visitor
|
||||||
|
methods.
|
||||||
|
|
||||||
|
Per default the visitor functions for the nodes are ``'visit_'`` +
|
||||||
|
class name of the node. So a `TryFinally` node visit function would
|
||||||
|
be `visit_TryFinally`. This behavior can be changed by overriding
|
||||||
|
the `visit` method. If no visitor function exists for a node
|
||||||
|
(return value `None`) the `generic_visit` visitor is used instead.
|
||||||
|
|
||||||
|
Don't use the `NodeVisitor` if you want to apply changes to nodes during
|
||||||
|
traversing. For this a special visitor exists (`NodeTransformer`) that
|
||||||
|
allows modifications.
|
||||||
|
"""
|
||||||
|
|
||||||
|
def visit(self, node):
|
||||||
|
"""Visit a node."""
|
||||||
|
method = 'visit_' + node.__class__.__name__
|
||||||
|
visitor = getattr(self, method, self.generic_visit)
|
||||||
|
return visitor(node)
|
||||||
|
|
||||||
|
def generic_visit(self, node):
|
||||||
|
"""Called if no explicit visitor function exists for a node."""
|
||||||
|
for field, value in iter_fields(node):
|
||||||
|
if isinstance(value, list):
|
||||||
|
for item in value:
|
||||||
|
if isinstance(item, AST):
|
||||||
|
self.visit(item)
|
||||||
|
elif isinstance(value, AST):
|
||||||
|
self.visit(value)
|
||||||
|
|
||||||
|
|
||||||
|
class NodeTransformer(NodeVisitor):
|
||||||
|
"""
|
||||||
|
A :class:`NodeVisitor` subclass that walks the abstract syntax tree and
|
||||||
|
allows modification of nodes.
|
||||||
|
|
||||||
|
The `NodeTransformer` will walk the AST and use the return value of the
|
||||||
|
visitor methods to replace or remove the old node. If the return value of
|
||||||
|
the visitor method is ``None``, the node will be removed from its location,
|
||||||
|
otherwise it is replaced with the return value. The return value may be the
|
||||||
|
original node in which case no replacement takes place.
|
||||||
|
|
||||||
|
Here is an example transformer that rewrites all occurrences of name lookups
|
||||||
|
(``foo``) to ``data['foo']``::
|
||||||
|
|
||||||
|
class RewriteName(NodeTransformer):
|
||||||
|
|
||||||
|
def visit_Name(self, node):
|
||||||
|
return copy_location(Subscript(
|
||||||
|
value=Name(id='data', ctx=Load()),
|
||||||
|
slice=Index(value=Str(s=node.id)),
|
||||||
|
ctx=node.ctx
|
||||||
|
), node)
|
||||||
|
|
||||||
|
Keep in mind that if the node you're operating on has child nodes you must
|
||||||
|
either transform the child nodes yourself or call the :meth:`generic_visit`
|
||||||
|
method for the node first.
|
||||||
|
|
||||||
|
For nodes that were part of a collection of statements (that applies to all
|
||||||
|
statement nodes), the visitor may also return a list of nodes rather than
|
||||||
|
just a single node.
|
||||||
|
|
||||||
|
Usually you use the transformer like this::
|
||||||
|
|
||||||
|
node = YourTransformer().visit(node)
|
||||||
|
"""
|
||||||
|
|
||||||
|
def generic_visit(self, node):
|
||||||
|
for field, old_value in iter_fields(node):
|
||||||
|
if isinstance(old_value, list):
|
||||||
|
new_values = []
|
||||||
|
for value in old_value:
|
||||||
|
if isinstance(value, AST):
|
||||||
|
value = self.visit(value)
|
||||||
|
if value is None:
|
||||||
|
continue
|
||||||
|
elif not isinstance(value, AST):
|
||||||
|
new_values.extend(value)
|
||||||
|
continue
|
||||||
|
new_values.append(value)
|
||||||
|
old_value[:] = new_values
|
||||||
|
elif isinstance(old_value, AST):
|
||||||
|
new_node = self.visit(old_value)
|
||||||
|
if new_node is None:
|
||||||
|
delattr(node, field)
|
||||||
|
else:
|
||||||
|
setattr(node, field, new_node)
|
||||||
|
return node
|
307
Lib/asynchat.py
Normal file
307
Lib/asynchat.py
Normal file
|
@ -0,0 +1,307 @@
|
||||||
|
# -*- Mode: Python; tab-width: 4 -*-
|
||||||
|
# Id: asynchat.py,v 2.26 2000/09/07 22:29:26 rushing Exp
|
||||||
|
# Author: Sam Rushing <rushing@nightmare.com>
|
||||||
|
|
||||||
|
# ======================================================================
|
||||||
|
# Copyright 1996 by Sam Rushing
|
||||||
|
#
|
||||||
|
# All Rights Reserved
|
||||||
|
#
|
||||||
|
# Permission to use, copy, modify, and distribute this software and
|
||||||
|
# its documentation for any purpose and without fee is hereby
|
||||||
|
# granted, provided that the above copyright notice appear in all
|
||||||
|
# copies and that both that copyright notice and this permission
|
||||||
|
# notice appear in supporting documentation, and that the name of Sam
|
||||||
|
# Rushing not be used in advertising or publicity pertaining to
|
||||||
|
# distribution of the software without specific, written prior
|
||||||
|
# permission.
|
||||||
|
#
|
||||||
|
# SAM RUSHING DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE,
|
||||||
|
# INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS, IN
|
||||||
|
# NO EVENT SHALL SAM RUSHING BE LIABLE FOR ANY SPECIAL, INDIRECT OR
|
||||||
|
# CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS
|
||||||
|
# OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT,
|
||||||
|
# NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
|
||||||
|
# CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
|
||||||
|
# ======================================================================
|
||||||
|
|
||||||
|
r"""A class supporting chat-style (command/response) protocols.
|
||||||
|
|
||||||
|
This class adds support for 'chat' style protocols - where one side
|
||||||
|
sends a 'command', and the other sends a response (examples would be
|
||||||
|
the common internet protocols - smtp, nntp, ftp, etc..).
|
||||||
|
|
||||||
|
The handle_read() method looks at the input stream for the current
|
||||||
|
'terminator' (usually '\r\n' for single-line responses, '\r\n.\r\n'
|
||||||
|
for multi-line output), calling self.found_terminator() on its
|
||||||
|
receipt.
|
||||||
|
|
||||||
|
for example:
|
||||||
|
Say you build an async nntp client using this class. At the start
|
||||||
|
of the connection, you'll have self.terminator set to '\r\n', in
|
||||||
|
order to process the single-line greeting. Just before issuing a
|
||||||
|
'LIST' command you'll set it to '\r\n.\r\n'. The output of the LIST
|
||||||
|
command will be accumulated (using your own 'collect_incoming_data'
|
||||||
|
method) up to the terminator, and then control will be returned to
|
||||||
|
you - by calling your self.found_terminator() method.
|
||||||
|
"""
|
||||||
|
import asyncore
|
||||||
|
from collections import deque
|
||||||
|
|
||||||
|
|
||||||
|
class async_chat(asyncore.dispatcher):
|
||||||
|
"""This is an abstract class. You must derive from this class, and add
|
||||||
|
the two methods collect_incoming_data() and found_terminator()"""
|
||||||
|
|
||||||
|
# these are overridable defaults
|
||||||
|
|
||||||
|
ac_in_buffer_size = 65536
|
||||||
|
ac_out_buffer_size = 65536
|
||||||
|
|
||||||
|
# we don't want to enable the use of encoding by default, because that is a
|
||||||
|
# sign of an application bug that we don't want to pass silently
|
||||||
|
|
||||||
|
use_encoding = 0
|
||||||
|
encoding = 'latin-1'
|
||||||
|
|
||||||
|
def __init__(self, sock=None, map=None):
|
||||||
|
# for string terminator matching
|
||||||
|
self.ac_in_buffer = b''
|
||||||
|
|
||||||
|
# we use a list here rather than io.BytesIO for a few reasons...
|
||||||
|
# del lst[:] is faster than bio.truncate(0)
|
||||||
|
# lst = [] is faster than bio.truncate(0)
|
||||||
|
self.incoming = []
|
||||||
|
|
||||||
|
# we toss the use of the "simple producer" and replace it with
|
||||||
|
# a pure deque, which the original fifo was a wrapping of
|
||||||
|
self.producer_fifo = deque()
|
||||||
|
asyncore.dispatcher.__init__(self, sock, map)
|
||||||
|
|
||||||
|
def collect_incoming_data(self, data):
|
||||||
|
raise NotImplementedError("must be implemented in subclass")
|
||||||
|
|
||||||
|
def _collect_incoming_data(self, data):
|
||||||
|
self.incoming.append(data)
|
||||||
|
|
||||||
|
def _get_data(self):
|
||||||
|
d = b''.join(self.incoming)
|
||||||
|
del self.incoming[:]
|
||||||
|
return d
|
||||||
|
|
||||||
|
def found_terminator(self):
|
||||||
|
raise NotImplementedError("must be implemented in subclass")
|
||||||
|
|
||||||
|
def set_terminator(self, term):
|
||||||
|
"""Set the input delimiter.
|
||||||
|
|
||||||
|
Can be a fixed string of any length, an integer, or None.
|
||||||
|
"""
|
||||||
|
if isinstance(term, str) and self.use_encoding:
|
||||||
|
term = bytes(term, self.encoding)
|
||||||
|
elif isinstance(term, int) and term < 0:
|
||||||
|
raise ValueError('the number of received bytes must be positive')
|
||||||
|
self.terminator = term
|
||||||
|
|
||||||
|
def get_terminator(self):
|
||||||
|
return self.terminator
|
||||||
|
|
||||||
|
# grab some more data from the socket,
|
||||||
|
# throw it to the collector method,
|
||||||
|
# check for the terminator,
|
||||||
|
# if found, transition to the next state.
|
||||||
|
|
||||||
|
def handle_read(self):
|
||||||
|
|
||||||
|
try:
|
||||||
|
data = self.recv(self.ac_in_buffer_size)
|
||||||
|
except BlockingIOError:
|
||||||
|
return
|
||||||
|
except OSError as why:
|
||||||
|
self.handle_error()
|
||||||
|
return
|
||||||
|
|
||||||
|
if isinstance(data, str) and self.use_encoding:
|
||||||
|
data = bytes(str, self.encoding)
|
||||||
|
self.ac_in_buffer = self.ac_in_buffer + data
|
||||||
|
|
||||||
|
# Continue to search for self.terminator in self.ac_in_buffer,
|
||||||
|
# while calling self.collect_incoming_data. The while loop
|
||||||
|
# is necessary because we might read several data+terminator
|
||||||
|
# combos with a single recv(4096).
|
||||||
|
|
||||||
|
while self.ac_in_buffer:
|
||||||
|
lb = len(self.ac_in_buffer)
|
||||||
|
terminator = self.get_terminator()
|
||||||
|
if not terminator:
|
||||||
|
# no terminator, collect it all
|
||||||
|
self.collect_incoming_data(self.ac_in_buffer)
|
||||||
|
self.ac_in_buffer = b''
|
||||||
|
elif isinstance(terminator, int):
|
||||||
|
# numeric terminator
|
||||||
|
n = terminator
|
||||||
|
if lb < n:
|
||||||
|
self.collect_incoming_data(self.ac_in_buffer)
|
||||||
|
self.ac_in_buffer = b''
|
||||||
|
self.terminator = self.terminator - lb
|
||||||
|
else:
|
||||||
|
self.collect_incoming_data(self.ac_in_buffer[:n])
|
||||||
|
self.ac_in_buffer = self.ac_in_buffer[n:]
|
||||||
|
self.terminator = 0
|
||||||
|
self.found_terminator()
|
||||||
|
else:
|
||||||
|
# 3 cases:
|
||||||
|
# 1) end of buffer matches terminator exactly:
|
||||||
|
# collect data, transition
|
||||||
|
# 2) end of buffer matches some prefix:
|
||||||
|
# collect data to the prefix
|
||||||
|
# 3) end of buffer does not match any prefix:
|
||||||
|
# collect data
|
||||||
|
terminator_len = len(terminator)
|
||||||
|
index = self.ac_in_buffer.find(terminator)
|
||||||
|
if index != -1:
|
||||||
|
# we found the terminator
|
||||||
|
if index > 0:
|
||||||
|
# don't bother reporting the empty string
|
||||||
|
# (source of subtle bugs)
|
||||||
|
self.collect_incoming_data(self.ac_in_buffer[:index])
|
||||||
|
self.ac_in_buffer = self.ac_in_buffer[index+terminator_len:]
|
||||||
|
# This does the Right Thing if the terminator
|
||||||
|
# is changed here.
|
||||||
|
self.found_terminator()
|
||||||
|
else:
|
||||||
|
# check for a prefix of the terminator
|
||||||
|
index = find_prefix_at_end(self.ac_in_buffer, terminator)
|
||||||
|
if index:
|
||||||
|
if index != lb:
|
||||||
|
# we found a prefix, collect up to the prefix
|
||||||
|
self.collect_incoming_data(self.ac_in_buffer[:-index])
|
||||||
|
self.ac_in_buffer = self.ac_in_buffer[-index:]
|
||||||
|
break
|
||||||
|
else:
|
||||||
|
# no prefix, collect it all
|
||||||
|
self.collect_incoming_data(self.ac_in_buffer)
|
||||||
|
self.ac_in_buffer = b''
|
||||||
|
|
||||||
|
def handle_write(self):
|
||||||
|
self.initiate_send()
|
||||||
|
|
||||||
|
def handle_close(self):
|
||||||
|
self.close()
|
||||||
|
|
||||||
|
def push(self, data):
|
||||||
|
if not isinstance(data, (bytes, bytearray, memoryview)):
|
||||||
|
raise TypeError('data argument must be byte-ish (%r)',
|
||||||
|
type(data))
|
||||||
|
sabs = self.ac_out_buffer_size
|
||||||
|
if len(data) > sabs:
|
||||||
|
for i in range(0, len(data), sabs):
|
||||||
|
self.producer_fifo.append(data[i:i+sabs])
|
||||||
|
else:
|
||||||
|
self.producer_fifo.append(data)
|
||||||
|
self.initiate_send()
|
||||||
|
|
||||||
|
def push_with_producer(self, producer):
|
||||||
|
self.producer_fifo.append(producer)
|
||||||
|
self.initiate_send()
|
||||||
|
|
||||||
|
def readable(self):
|
||||||
|
"predicate for inclusion in the readable for select()"
|
||||||
|
# cannot use the old predicate, it violates the claim of the
|
||||||
|
# set_terminator method.
|
||||||
|
|
||||||
|
# return (len(self.ac_in_buffer) <= self.ac_in_buffer_size)
|
||||||
|
return 1
|
||||||
|
|
||||||
|
def writable(self):
|
||||||
|
"predicate for inclusion in the writable for select()"
|
||||||
|
return self.producer_fifo or (not self.connected)
|
||||||
|
|
||||||
|
def close_when_done(self):
|
||||||
|
"automatically close this channel once the outgoing queue is empty"
|
||||||
|
self.producer_fifo.append(None)
|
||||||
|
|
||||||
|
def initiate_send(self):
|
||||||
|
while self.producer_fifo and self.connected:
|
||||||
|
first = self.producer_fifo[0]
|
||||||
|
# handle empty string/buffer or None entry
|
||||||
|
if not first:
|
||||||
|
del self.producer_fifo[0]
|
||||||
|
if first is None:
|
||||||
|
self.handle_close()
|
||||||
|
return
|
||||||
|
|
||||||
|
# handle classic producer behavior
|
||||||
|
obs = self.ac_out_buffer_size
|
||||||
|
try:
|
||||||
|
data = first[:obs]
|
||||||
|
except TypeError:
|
||||||
|
data = first.more()
|
||||||
|
if data:
|
||||||
|
self.producer_fifo.appendleft(data)
|
||||||
|
else:
|
||||||
|
del self.producer_fifo[0]
|
||||||
|
continue
|
||||||
|
|
||||||
|
if isinstance(data, str) and self.use_encoding:
|
||||||
|
data = bytes(data, self.encoding)
|
||||||
|
|
||||||
|
# send the data
|
||||||
|
try:
|
||||||
|
num_sent = self.send(data)
|
||||||
|
except OSError:
|
||||||
|
self.handle_error()
|
||||||
|
return
|
||||||
|
|
||||||
|
if num_sent:
|
||||||
|
if num_sent < len(data) or obs < len(first):
|
||||||
|
self.producer_fifo[0] = first[num_sent:]
|
||||||
|
else:
|
||||||
|
del self.producer_fifo[0]
|
||||||
|
# we tried to send some actual data
|
||||||
|
return
|
||||||
|
|
||||||
|
def discard_buffers(self):
|
||||||
|
# Emergencies only!
|
||||||
|
self.ac_in_buffer = b''
|
||||||
|
del self.incoming[:]
|
||||||
|
self.producer_fifo.clear()
|
||||||
|
|
||||||
|
|
||||||
|
class simple_producer:
|
||||||
|
|
||||||
|
def __init__(self, data, buffer_size=512):
|
||||||
|
self.data = data
|
||||||
|
self.buffer_size = buffer_size
|
||||||
|
|
||||||
|
def more(self):
|
||||||
|
if len(self.data) > self.buffer_size:
|
||||||
|
result = self.data[:self.buffer_size]
|
||||||
|
self.data = self.data[self.buffer_size:]
|
||||||
|
return result
|
||||||
|
else:
|
||||||
|
result = self.data
|
||||||
|
self.data = b''
|
||||||
|
return result
|
||||||
|
|
||||||
|
|
||||||
|
# Given 'haystack', see if any prefix of 'needle' is at its end. This
|
||||||
|
# assumes an exact match has already been checked. Return the number of
|
||||||
|
# characters matched.
|
||||||
|
# for example:
|
||||||
|
# f_p_a_e("qwerty\r", "\r\n") => 1
|
||||||
|
# f_p_a_e("qwertydkjf", "\r\n") => 0
|
||||||
|
# f_p_a_e("qwerty\r\n", "\r\n") => <undefined>
|
||||||
|
|
||||||
|
# this could maybe be made faster with a computed regex?
|
||||||
|
# [answer: no; circa Python-2.0, Jan 2001]
|
||||||
|
# new python: 28961/s
|
||||||
|
# old python: 18307/s
|
||||||
|
# re: 12820/s
|
||||||
|
# regex: 14035/s
|
||||||
|
|
||||||
|
def find_prefix_at_end(haystack, needle):
|
||||||
|
l = len(needle) - 1
|
||||||
|
while l and not haystack.endswith(needle[:l]):
|
||||||
|
l -= 1
|
||||||
|
return l
|
43
Lib/asyncio/__init__.py
Normal file
43
Lib/asyncio/__init__.py
Normal file
|
@ -0,0 +1,43 @@
|
||||||
|
"""The asyncio package, tracking PEP 3156."""
|
||||||
|
|
||||||
|
# flake8: noqa
|
||||||
|
|
||||||
|
import sys
|
||||||
|
|
||||||
|
# This relies on each of the submodules having an __all__ variable.
|
||||||
|
from .base_events import *
|
||||||
|
from .coroutines import *
|
||||||
|
from .events import *
|
||||||
|
from .futures import *
|
||||||
|
from .locks import *
|
||||||
|
from .protocols import *
|
||||||
|
from .runners import *
|
||||||
|
from .queues import *
|
||||||
|
from .streams import *
|
||||||
|
from .subprocess import *
|
||||||
|
from .tasks import *
|
||||||
|
from .transports import *
|
||||||
|
|
||||||
|
# Exposed for _asynciomodule.c to implement now deprecated
|
||||||
|
# Task.all_tasks() method. This function will be removed in 3.9.
|
||||||
|
from .tasks import _all_tasks_compat # NoQA
|
||||||
|
|
||||||
|
__all__ = (base_events.__all__ +
|
||||||
|
coroutines.__all__ +
|
||||||
|
events.__all__ +
|
||||||
|
futures.__all__ +
|
||||||
|
locks.__all__ +
|
||||||
|
protocols.__all__ +
|
||||||
|
runners.__all__ +
|
||||||
|
queues.__all__ +
|
||||||
|
streams.__all__ +
|
||||||
|
subprocess.__all__ +
|
||||||
|
tasks.__all__ +
|
||||||
|
transports.__all__)
|
||||||
|
|
||||||
|
if sys.platform == 'win32': # pragma: no cover
|
||||||
|
from .windows_events import *
|
||||||
|
__all__ += windows_events.__all__
|
||||||
|
else:
|
||||||
|
from .unix_events import * # pragma: no cover
|
||||||
|
__all__ += unix_events.__all__
|
1800
Lib/asyncio/base_events.py
Normal file
1800
Lib/asyncio/base_events.py
Normal file
File diff suppressed because it is too large
Load diff
71
Lib/asyncio/base_futures.py
Normal file
71
Lib/asyncio/base_futures.py
Normal file
|
@ -0,0 +1,71 @@
|
||||||
|
__all__ = ()
|
||||||
|
|
||||||
|
import concurrent.futures._base
|
||||||
|
import reprlib
|
||||||
|
|
||||||
|
from . import format_helpers
|
||||||
|
|
||||||
|
Error = concurrent.futures._base.Error
|
||||||
|
CancelledError = concurrent.futures.CancelledError
|
||||||
|
TimeoutError = concurrent.futures.TimeoutError
|
||||||
|
|
||||||
|
|
||||||
|
class InvalidStateError(Error):
|
||||||
|
"""The operation is not allowed in this state."""
|
||||||
|
|
||||||
|
|
||||||
|
# States for Future.
|
||||||
|
_PENDING = 'PENDING'
|
||||||
|
_CANCELLED = 'CANCELLED'
|
||||||
|
_FINISHED = 'FINISHED'
|
||||||
|
|
||||||
|
|
||||||
|
def isfuture(obj):
|
||||||
|
"""Check for a Future.
|
||||||
|
|
||||||
|
This returns True when obj is a Future instance or is advertising
|
||||||
|
itself as duck-type compatible by setting _asyncio_future_blocking.
|
||||||
|
See comment in Future for more details.
|
||||||
|
"""
|
||||||
|
return (hasattr(obj.__class__, '_asyncio_future_blocking') and
|
||||||
|
obj._asyncio_future_blocking is not None)
|
||||||
|
|
||||||
|
|
||||||
|
def _format_callbacks(cb):
|
||||||
|
"""helper function for Future.__repr__"""
|
||||||
|
size = len(cb)
|
||||||
|
if not size:
|
||||||
|
cb = ''
|
||||||
|
|
||||||
|
def format_cb(callback):
|
||||||
|
return format_helpers._format_callback_source(callback, ())
|
||||||
|
|
||||||
|
if size == 1:
|
||||||
|
cb = format_cb(cb[0][0])
|
||||||
|
elif size == 2:
|
||||||
|
cb = '{}, {}'.format(format_cb(cb[0][0]), format_cb(cb[1][0]))
|
||||||
|
elif size > 2:
|
||||||
|
cb = '{}, <{} more>, {}'.format(format_cb(cb[0][0]),
|
||||||
|
size - 2,
|
||||||
|
format_cb(cb[-1][0]))
|
||||||
|
return f'cb=[{cb}]'
|
||||||
|
|
||||||
|
|
||||||
|
def _future_repr_info(future):
|
||||||
|
# (Future) -> str
|
||||||
|
"""helper function for Future.__repr__"""
|
||||||
|
info = [future._state.lower()]
|
||||||
|
if future._state == _FINISHED:
|
||||||
|
if future._exception is not None:
|
||||||
|
info.append(f'exception={future._exception!r}')
|
||||||
|
else:
|
||||||
|
# use reprlib to limit the length of the output, especially
|
||||||
|
# for very long strings
|
||||||
|
result = reprlib.repr(future._result)
|
||||||
|
info.append(f'result={result}')
|
||||||
|
if future._callbacks:
|
||||||
|
info.append(_format_callbacks(future._callbacks))
|
||||||
|
if future._source_traceback:
|
||||||
|
frame = future._source_traceback[-1]
|
||||||
|
info.append(f'created at {frame[0]}:{frame[1]}')
|
||||||
|
return info
|
284
Lib/asyncio/base_subprocess.py
Normal file
284
Lib/asyncio/base_subprocess.py
Normal file
|
@ -0,0 +1,284 @@
|
||||||
|
import collections
|
||||||
|
import subprocess
|
||||||
|
import warnings
|
||||||
|
|
||||||
|
from . import protocols
|
||||||
|
from . import transports
|
||||||
|
from .log import logger
|
||||||
|
|
||||||
|
|
||||||
|
class BaseSubprocessTransport(transports.SubprocessTransport):
|
||||||
|
|
||||||
|
def __init__(self, loop, protocol, args, shell,
|
||||||
|
stdin, stdout, stderr, bufsize,
|
||||||
|
waiter=None, extra=None, **kwargs):
|
||||||
|
super().__init__(extra)
|
||||||
|
self._closed = False
|
||||||
|
self._protocol = protocol
|
||||||
|
self._loop = loop
|
||||||
|
self._proc = None
|
||||||
|
self._pid = None
|
||||||
|
self._returncode = None
|
||||||
|
self._exit_waiters = []
|
||||||
|
self._pending_calls = collections.deque()
|
||||||
|
self._pipes = {}
|
||||||
|
self._finished = False
|
||||||
|
|
||||||
|
if stdin == subprocess.PIPE:
|
||||||
|
self._pipes[0] = None
|
||||||
|
if stdout == subprocess.PIPE:
|
||||||
|
self._pipes[1] = None
|
||||||
|
if stderr == subprocess.PIPE:
|
||||||
|
self._pipes[2] = None
|
||||||
|
|
||||||
|
# Create the child process: set the _proc attribute
|
||||||
|
try:
|
||||||
|
self._start(args=args, shell=shell, stdin=stdin, stdout=stdout,
|
||||||
|
stderr=stderr, bufsize=bufsize, **kwargs)
|
||||||
|
except:
|
||||||
|
self.close()
|
||||||
|
raise
|
||||||
|
|
||||||
|
self._pid = self._proc.pid
|
||||||
|
self._extra['subprocess'] = self._proc
|
||||||
|
|
||||||
|
if self._loop.get_debug():
|
||||||
|
if isinstance(args, (bytes, str)):
|
||||||
|
program = args
|
||||||
|
else:
|
||||||
|
program = args[0]
|
||||||
|
logger.debug('process %r created: pid %s',
|
||||||
|
program, self._pid)
|
||||||
|
|
||||||
|
self._loop.create_task(self._connect_pipes(waiter))
|
||||||
|
|
||||||
|
def __repr__(self):
|
||||||
|
info = [self.__class__.__name__]
|
||||||
|
if self._closed:
|
||||||
|
info.append('closed')
|
||||||
|
if self._pid is not None:
|
||||||
|
info.append(f'pid={self._pid}')
|
||||||
|
if self._returncode is not None:
|
||||||
|
info.append(f'returncode={self._returncode}')
|
||||||
|
elif self._pid is not None:
|
||||||
|
info.append('running')
|
||||||
|
else:
|
||||||
|
info.append('not started')
|
||||||
|
|
||||||
|
stdin = self._pipes.get(0)
|
||||||
|
if stdin is not None:
|
||||||
|
info.append(f'stdin={stdin.pipe}')
|
||||||
|
|
||||||
|
stdout = self._pipes.get(1)
|
||||||
|
stderr = self._pipes.get(2)
|
||||||
|
if stdout is not None and stderr is stdout:
|
||||||
|
info.append(f'stdout=stderr={stdout.pipe}')
|
||||||
|
else:
|
||||||
|
if stdout is not None:
|
||||||
|
info.append(f'stdout={stdout.pipe}')
|
||||||
|
if stderr is not None:
|
||||||
|
info.append(f'stderr={stderr.pipe}')
|
||||||
|
|
||||||
|
return '<{}>'.format(' '.join(info))
|
||||||
|
|
||||||
|
def _start(self, args, shell, stdin, stdout, stderr, bufsize, **kwargs):
|
||||||
|
raise NotImplementedError
|
||||||
|
|
||||||
|
def set_protocol(self, protocol):
|
||||||
|
self._protocol = protocol
|
||||||
|
|
||||||
|
def get_protocol(self):
|
||||||
|
return self._protocol
|
||||||
|
|
||||||
|
def is_closing(self):
|
||||||
|
return self._closed
|
||||||
|
|
||||||
|
def close(self):
|
||||||
|
if self._closed:
|
||||||
|
return
|
||||||
|
self._closed = True
|
||||||
|
|
||||||
|
for proto in self._pipes.values():
|
||||||
|
if proto is None:
|
||||||
|
continue
|
||||||
|
proto.pipe.close()
|
||||||
|
|
||||||
|
if (self._proc is not None and
|
||||||
|
# has the child process finished?
|
||||||
|
self._returncode is None and
|
||||||
|
# the child process has finished, but the
|
||||||
|
# transport hasn't been notified yet?
|
||||||
|
self._proc.poll() is None):
|
||||||
|
|
||||||
|
if self._loop.get_debug():
|
||||||
|
logger.warning('Close running child process: kill %r', self)
|
||||||
|
|
||||||
|
try:
|
||||||
|
self._proc.kill()
|
||||||
|
except ProcessLookupError:
|
||||||
|
pass
|
||||||
|
|
||||||
|
# Don't clear the _proc reference yet: _post_init() may still run
|
||||||
|
|
||||||
|
def __del__(self):
|
||||||
|
if not self._closed:
|
||||||
|
warnings.warn(f"unclosed transport {self!r}", ResourceWarning,
|
||||||
|
source=self)
|
||||||
|
self.close()
|
||||||
|
|
||||||
|
def get_pid(self):
|
||||||
|
return self._pid
|
||||||
|
|
||||||
|
def get_returncode(self):
|
||||||
|
return self._returncode
|
||||||
|
|
||||||
|
def get_pipe_transport(self, fd):
|
||||||
|
if fd in self._pipes:
|
||||||
|
return self._pipes[fd].pipe
|
||||||
|
else:
|
||||||
|
return None
|
||||||
|
|
||||||
|
def _check_proc(self):
|
||||||
|
if self._proc is None:
|
||||||
|
raise ProcessLookupError()
|
||||||
|
|
||||||
|
def send_signal(self, signal):
|
||||||
|
self._check_proc()
|
||||||
|
self._proc.send_signal(signal)
|
||||||
|
|
||||||
|
def terminate(self):
|
||||||
|
self._check_proc()
|
||||||
|
self._proc.terminate()
|
||||||
|
|
||||||
|
def kill(self):
|
||||||
|
self._check_proc()
|
||||||
|
self._proc.kill()
|
||||||
|
|
||||||
|
async def _connect_pipes(self, waiter):
|
||||||
|
try:
|
||||||
|
proc = self._proc
|
||||||
|
loop = self._loop
|
||||||
|
|
||||||
|
if proc.stdin is not None:
|
||||||
|
_, pipe = await loop.connect_write_pipe(
|
||||||
|
lambda: WriteSubprocessPipeProto(self, 0),
|
||||||
|
proc.stdin)
|
||||||
|
self._pipes[0] = pipe
|
||||||
|
|
||||||
|
if proc.stdout is not None:
|
||||||
|
_, pipe = await loop.connect_read_pipe(
|
||||||
|
lambda: ReadSubprocessPipeProto(self, 1),
|
||||||
|
proc.stdout)
|
||||||
|
self._pipes[1] = pipe
|
||||||
|
|
||||||
|
if proc.stderr is not None:
|
||||||
|
_, pipe = await loop.connect_read_pipe(
|
||||||
|
lambda: ReadSubprocessPipeProto(self, 2),
|
||||||
|
proc.stderr)
|
||||||
|
self._pipes[2] = pipe
|
||||||
|
|
||||||
|
assert self._pending_calls is not None
|
||||||
|
|
||||||
|
loop.call_soon(self._protocol.connection_made, self)
|
||||||
|
for callback, data in self._pending_calls:
|
||||||
|
loop.call_soon(callback, *data)
|
||||||
|
self._pending_calls = None
|
||||||
|
except Exception as exc:
|
||||||
|
if waiter is not None and not waiter.cancelled():
|
||||||
|
waiter.set_exception(exc)
|
||||||
|
else:
|
||||||
|
if waiter is not None and not waiter.cancelled():
|
||||||
|
waiter.set_result(None)
|
||||||
|
|
||||||
|
def _call(self, cb, *data):
|
||||||
|
if self._pending_calls is not None:
|
||||||
|
self._pending_calls.append((cb, data))
|
||||||
|
else:
|
||||||
|
self._loop.call_soon(cb, *data)
|
||||||
|
|
||||||
|
def _pipe_connection_lost(self, fd, exc):
|
||||||
|
self._call(self._protocol.pipe_connection_lost, fd, exc)
|
||||||
|
self._try_finish()
|
||||||
|
|
||||||
|
def _pipe_data_received(self, fd, data):
|
||||||
|
self._call(self._protocol.pipe_data_received, fd, data)
|
||||||
|
|
||||||
|
def _process_exited(self, returncode):
|
||||||
|
assert returncode is not None, returncode
|
||||||
|
assert self._returncode is None, self._returncode
|
||||||
|
if self._loop.get_debug():
|
||||||
|
logger.info('%r exited with return code %r', self, returncode)
|
||||||
|
self._returncode = returncode
|
||||||
|
if self._proc.returncode is None:
|
||||||
|
# asyncio uses a child watcher: copy the status into the Popen
|
||||||
|
# object. On Python 3.6, it is required to avoid a ResourceWarning.
|
||||||
|
self._proc.returncode = returncode
|
||||||
|
self._call(self._protocol.process_exited)
|
||||||
|
self._try_finish()
|
||||||
|
|
||||||
|
# wake up futures waiting for wait()
|
||||||
|
for waiter in self._exit_waiters:
|
||||||
|
if not waiter.cancelled():
|
||||||
|
waiter.set_result(returncode)
|
||||||
|
self._exit_waiters = None
|
||||||
|
|
||||||
|
async def _wait(self):
|
||||||
|
"""Wait until the process exit and return the process return code.
|
||||||
|
|
||||||
|
This method is a coroutine."""
|
||||||
|
if self._returncode is not None:
|
||||||
|
return self._returncode
|
||||||
|
|
||||||
|
waiter = self._loop.create_future()
|
||||||
|
self._exit_waiters.append(waiter)
|
||||||
|
return await waiter
|
||||||
|
|
||||||
|
def _try_finish(self):
|
||||||
|
assert not self._finished
|
||||||
|
if self._returncode is None:
|
||||||
|
return
|
||||||
|
if all(p is not None and p.disconnected
|
||||||
|
for p in self._pipes.values()):
|
||||||
|
self._finished = True
|
||||||
|
self._call(self._call_connection_lost, None)
|
||||||
|
|
||||||
|
def _call_connection_lost(self, exc):
|
||||||
|
try:
|
||||||
|
self._protocol.connection_lost(exc)
|
||||||
|
finally:
|
||||||
|
self._loop = None
|
||||||
|
self._proc = None
|
||||||
|
self._protocol = None
|
||||||
|
|
||||||
|
|
||||||
|
class WriteSubprocessPipeProto(protocols.BaseProtocol):
|
||||||
|
|
||||||
|
def __init__(self, proc, fd):
|
||||||
|
self.proc = proc
|
||||||
|
self.fd = fd
|
||||||
|
self.pipe = None
|
||||||
|
self.disconnected = False
|
||||||
|
|
||||||
|
def connection_made(self, transport):
|
||||||
|
self.pipe = transport
|
||||||
|
|
||||||
|
def __repr__(self):
|
||||||
|
return f'<{self.__class__.__name__} fd={self.fd} pipe={self.pipe!r}>'
|
||||||
|
|
||||||
|
def connection_lost(self, exc):
|
||||||
|
self.disconnected = True
|
||||||
|
self.proc._pipe_connection_lost(self.fd, exc)
|
||||||
|
self.proc = None
|
||||||
|
|
||||||
|
def pause_writing(self):
|
||||||
|
self.proc._protocol.pause_writing()
|
||||||
|
|
||||||
|
def resume_writing(self):
|
||||||
|
self.proc._protocol.resume_writing()
|
||||||
|
|
||||||
|
|
||||||
|
class ReadSubprocessPipeProto(WriteSubprocessPipeProto,
|
||||||
|
protocols.Protocol):
|
||||||
|
|
||||||
|
def data_received(self, data):
|
||||||
|
self.proc._pipe_data_received(self.fd, data)
|
76
Lib/asyncio/base_tasks.py
Normal file
76
Lib/asyncio/base_tasks.py
Normal file
|
@ -0,0 +1,76 @@
|
||||||
|
import linecache
|
||||||
|
import traceback
|
||||||
|
|
||||||
|
from . import base_futures
|
||||||
|
from . import coroutines
|
||||||
|
|
||||||
|
|
||||||
|
def _task_repr_info(task):
|
||||||
|
info = base_futures._future_repr_info(task)
|
||||||
|
|
||||||
|
if task._must_cancel:
|
||||||
|
# replace status
|
||||||
|
info[0] = 'cancelling'
|
||||||
|
|
||||||
|
coro = coroutines._format_coroutine(task._coro)
|
||||||
|
info.insert(1, f'coro=<{coro}>')
|
||||||
|
|
||||||
|
if task._fut_waiter is not None:
|
||||||
|
info.insert(2, f'wait_for={task._fut_waiter!r}')
|
||||||
|
return info
|
||||||
|
|
||||||
|
|
||||||
|
def _task_get_stack(task, limit):
|
||||||
|
frames = []
|
||||||
|
try:
|
||||||
|
# 'async def' coroutines
|
||||||
|
f = task._coro.cr_frame
|
||||||
|
except AttributeError:
|
||||||
|
f = task._coro.gi_frame
|
||||||
|
if f is not None:
|
||||||
|
while f is not None:
|
||||||
|
if limit is not None:
|
||||||
|
if limit <= 0:
|
||||||
|
break
|
||||||
|
limit -= 1
|
||||||
|
frames.append(f)
|
||||||
|
f = f.f_back
|
||||||
|
frames.reverse()
|
||||||
|
elif task._exception is not None:
|
||||||
|
tb = task._exception.__traceback__
|
||||||
|
while tb is not None:
|
||||||
|
if limit is not None:
|
||||||
|
if limit <= 0:
|
||||||
|
break
|
||||||
|
limit -= 1
|
||||||
|
frames.append(tb.tb_frame)
|
||||||
|
tb = tb.tb_next
|
||||||
|
return frames
|
||||||
|
|
||||||
|
|
||||||
|
def _task_print_stack(task, limit, file):
|
||||||
|
extracted_list = []
|
||||||
|
checked = set()
|
||||||
|
for f in task.get_stack(limit=limit):
|
||||||
|
lineno = f.f_lineno
|
||||||
|
co = f.f_code
|
||||||
|
filename = co.co_filename
|
||||||
|
name = co.co_name
|
||||||
|
if filename not in checked:
|
||||||
|
checked.add(filename)
|
||||||
|
linecache.checkcache(filename)
|
||||||
|
line = linecache.getline(filename, lineno, f.f_globals)
|
||||||
|
extracted_list.append((filename, lineno, name, line))
|
||||||
|
|
||||||
|
exc = task._exception
|
||||||
|
if not extracted_list:
|
||||||
|
print(f'No stack for {task!r}', file=file)
|
||||||
|
elif exc is not None:
|
||||||
|
print(f'Traceback for {task!r} (most recent call last):', file=file)
|
||||||
|
else:
|
||||||
|
print(f'Stack for {task!r} (most recent call last):', file=file)
|
||||||
|
|
||||||
|
traceback.print_list(extracted_list, file=file)
|
||||||
|
if exc is not None:
|
||||||
|
for line in traceback.format_exception_only(exc.__class__, exc):
|
||||||
|
print(line, file=file, end='')
|
27
Lib/asyncio/constants.py
Normal file
27
Lib/asyncio/constants.py
Normal file
|
@ -0,0 +1,27 @@
|
||||||
|
import enum
|
||||||
|
|
||||||
|
# After the connection is lost, log warnings after this many write()s.
|
||||||
|
LOG_THRESHOLD_FOR_CONNLOST_WRITES = 5
|
||||||
|
|
||||||
|
# Seconds to wait before retrying accept().
|
||||||
|
ACCEPT_RETRY_DELAY = 1
|
||||||
|
|
||||||
|
# Number of stack entries to capture in debug mode.
|
||||||
|
# The larger the number, the slower the operation in debug mode
|
||||||
|
# (see extract_stack() in format_helpers.py).
|
||||||
|
DEBUG_STACK_DEPTH = 10
|
||||||
|
|
||||||
|
# Number of seconds to wait for SSL handshake to complete
|
||||||
|
# The default timeout matches that of Nginx.
|
||||||
|
SSL_HANDSHAKE_TIMEOUT = 60.0
|
||||||
|
|
||||||
|
# Used in sendfile fallback code. We use fallback for platforms
|
||||||
|
# that don't support sendfile, or for TLS connections.
|
||||||
|
SENDFILE_FALLBACK_READBUFFER_SIZE = 1024 * 256
|
||||||
|
|
||||||
|
# The enum should be here to break circular dependencies between
|
||||||
|
# base_events and sslproto
|
||||||
|
class _SendfileMode(enum.Enum):
|
||||||
|
UNSUPPORTED = enum.auto()
|
||||||
|
TRY_NATIVE = enum.auto()
|
||||||
|
FALLBACK = enum.auto()
|
265
Lib/asyncio/coroutines.py
Normal file
265
Lib/asyncio/coroutines.py
Normal file
|
@ -0,0 +1,265 @@
|
||||||
|
__all__ = 'coroutine', 'iscoroutinefunction', 'iscoroutine'
|
||||||
|
|
||||||
|
import collections.abc
|
||||||
|
import functools
|
||||||
|
import inspect
|
||||||
|
import os
|
||||||
|
import sys
|
||||||
|
import traceback
|
||||||
|
import types
|
||||||
|
|
||||||
|
from . import base_futures
|
||||||
|
from . import constants
|
||||||
|
from . import format_helpers
|
||||||
|
from .log import logger
|
||||||
|
|
||||||
|
|
||||||
|
def _is_debug_mode():
|
||||||
|
# If you set _DEBUG to true, @coroutine will wrap the resulting
|
||||||
|
# generator objects in a CoroWrapper instance (defined below). That
|
||||||
|
# instance will log a message when the generator is never iterated
|
||||||
|
# over, which may happen when you forget to use "await" or "yield from"
|
||||||
|
# with a coroutine call.
|
||||||
|
# Note that the value of the _DEBUG flag is taken
|
||||||
|
# when the decorator is used, so to be of any use it must be set
|
||||||
|
# before you define your coroutines. A downside of using this feature
|
||||||
|
# is that tracebacks show entries for the CoroWrapper.__next__ method
|
||||||
|
# when _DEBUG is true.
|
||||||
|
return sys.flags.dev_mode or (not sys.flags.ignore_environment and
|
||||||
|
bool(os.environ.get('PYTHONASYNCIODEBUG')))
|
||||||
|
|
||||||
|
|
||||||
|
_DEBUG = _is_debug_mode()
|
||||||
|
|
||||||
|
|
||||||
|
class CoroWrapper:
|
||||||
|
# Wrapper for coroutine object in _DEBUG mode.
|
||||||
|
|
||||||
|
def __init__(self, gen, func=None):
|
||||||
|
assert inspect.isgenerator(gen) or inspect.iscoroutine(gen), gen
|
||||||
|
self.gen = gen
|
||||||
|
self.func = func # Used to unwrap @coroutine decorator
|
||||||
|
self._source_traceback = format_helpers.extract_stack(sys._getframe(1))
|
||||||
|
self.__name__ = getattr(gen, '__name__', None)
|
||||||
|
self.__qualname__ = getattr(gen, '__qualname__', None)
|
||||||
|
|
||||||
|
def __repr__(self):
|
||||||
|
coro_repr = _format_coroutine(self)
|
||||||
|
if self._source_traceback:
|
||||||
|
frame = self._source_traceback[-1]
|
||||||
|
coro_repr += f', created at {frame[0]}:{frame[1]}'
|
||||||
|
|
||||||
|
return f'<{self.__class__.__name__} {coro_repr}>'
|
||||||
|
|
||||||
|
def __iter__(self):
|
||||||
|
return self
|
||||||
|
|
||||||
|
def __next__(self):
|
||||||
|
return self.gen.send(None)
|
||||||
|
|
||||||
|
def send(self, value):
|
||||||
|
return self.gen.send(value)
|
||||||
|
|
||||||
|
def throw(self, type, value=None, traceback=None):
|
||||||
|
return self.gen.throw(type, value, traceback)
|
||||||
|
|
||||||
|
def close(self):
|
||||||
|
return self.gen.close()
|
||||||
|
|
||||||
|
@property
|
||||||
|
def gi_frame(self):
|
||||||
|
return self.gen.gi_frame
|
||||||
|
|
||||||
|
@property
|
||||||
|
def gi_running(self):
|
||||||
|
return self.gen.gi_running
|
||||||
|
|
||||||
|
@property
|
||||||
|
def gi_code(self):
|
||||||
|
return self.gen.gi_code
|
||||||
|
|
||||||
|
def __await__(self):
|
||||||
|
return self
|
||||||
|
|
||||||
|
@property
|
||||||
|
def gi_yieldfrom(self):
|
||||||
|
return self.gen.gi_yieldfrom
|
||||||
|
|
||||||
|
def __del__(self):
|
||||||
|
# Be careful accessing self.gen.frame -- self.gen might not exist.
|
||||||
|
gen = getattr(self, 'gen', None)
|
||||||
|
frame = getattr(gen, 'gi_frame', None)
|
||||||
|
if frame is not None and frame.f_lasti == -1:
|
||||||
|
msg = f'{self!r} was never yielded from'
|
||||||
|
tb = getattr(self, '_source_traceback', ())
|
||||||
|
if tb:
|
||||||
|
tb = ''.join(traceback.format_list(tb))
|
||||||
|
msg += (f'\nCoroutine object created at '
|
||||||
|
f'(most recent call last, truncated to '
|
||||||
|
f'{constants.DEBUG_STACK_DEPTH} last lines):\n')
|
||||||
|
msg += tb.rstrip()
|
||||||
|
logger.error(msg)
|
||||||
|
|
||||||
|
|
||||||
|
def coroutine(func):
|
||||||
|
"""Decorator to mark coroutines.
|
||||||
|
|
||||||
|
If the coroutine is not yielded from before it is destroyed,
|
||||||
|
an error message is logged.
|
||||||
|
"""
|
||||||
|
if inspect.iscoroutinefunction(func):
|
||||||
|
# In Python 3.5 that's all we need to do for coroutines
|
||||||
|
# defined with "async def".
|
||||||
|
return func
|
||||||
|
|
||||||
|
if inspect.isgeneratorfunction(func):
|
||||||
|
coro = func
|
||||||
|
else:
|
||||||
|
@functools.wraps(func)
|
||||||
|
def coro(*args, **kw):
|
||||||
|
res = func(*args, **kw)
|
||||||
|
if (base_futures.isfuture(res) or inspect.isgenerator(res) or
|
||||||
|
isinstance(res, CoroWrapper)):
|
||||||
|
res = yield from res
|
||||||
|
else:
|
||||||
|
# If 'res' is an awaitable, run it.
|
||||||
|
try:
|
||||||
|
await_meth = res.__await__
|
||||||
|
except AttributeError:
|
||||||
|
pass
|
||||||
|
else:
|
||||||
|
if isinstance(res, collections.abc.Awaitable):
|
||||||
|
res = yield from await_meth()
|
||||||
|
return res
|
||||||
|
|
||||||
|
coro = types.coroutine(coro)
|
||||||
|
if not _DEBUG:
|
||||||
|
wrapper = coro
|
||||||
|
else:
|
||||||
|
@functools.wraps(func)
|
||||||
|
def wrapper(*args, **kwds):
|
||||||
|
w = CoroWrapper(coro(*args, **kwds), func=func)
|
||||||
|
if w._source_traceback:
|
||||||
|
del w._source_traceback[-1]
|
||||||
|
# Python < 3.5 does not implement __qualname__
|
||||||
|
# on generator objects, so we set it manually.
|
||||||
|
# We use getattr as some callables (such as
|
||||||
|
# functools.partial may lack __qualname__).
|
||||||
|
w.__name__ = getattr(func, '__name__', None)
|
||||||
|
w.__qualname__ = getattr(func, '__qualname__', None)
|
||||||
|
return w
|
||||||
|
|
||||||
|
wrapper._is_coroutine = _is_coroutine # For iscoroutinefunction().
|
||||||
|
return wrapper
|
||||||
|
|
||||||
|
|
||||||
|
# A marker for iscoroutinefunction.
|
||||||
|
_is_coroutine = object()
|
||||||
|
|
||||||
|
|
||||||
|
def iscoroutinefunction(func):
|
||||||
|
"""Return True if func is a decorated coroutine function."""
|
||||||
|
return (inspect.iscoroutinefunction(func) or
|
||||||
|
getattr(func, '_is_coroutine', None) is _is_coroutine)
|
||||||
|
|
||||||
|
|
||||||
|
# Prioritize native coroutine check to speed-up
|
||||||
|
# asyncio.iscoroutine.
|
||||||
|
_COROUTINE_TYPES = (types.CoroutineType, types.GeneratorType,
|
||||||
|
collections.abc.Coroutine, CoroWrapper)
|
||||||
|
_iscoroutine_typecache = set()
|
||||||
|
|
||||||
|
|
||||||
|
def iscoroutine(obj):
|
||||||
|
"""Return True if obj is a coroutine object."""
|
||||||
|
if type(obj) in _iscoroutine_typecache:
|
||||||
|
return True
|
||||||
|
|
||||||
|
if isinstance(obj, _COROUTINE_TYPES):
|
||||||
|
# Just in case we don't want to cache more than 100
|
||||||
|
# positive types. That shouldn't ever happen, unless
|
||||||
|
# someone stressing the system on purpose.
|
||||||
|
if len(_iscoroutine_typecache) < 100:
|
||||||
|
_iscoroutine_typecache.add(type(obj))
|
||||||
|
return True
|
||||||
|
else:
|
||||||
|
return False
|
||||||
|
|
||||||
|
|
||||||
|
def _format_coroutine(coro):
|
||||||
|
assert iscoroutine(coro)
|
||||||
|
|
||||||
|
is_corowrapper = isinstance(coro, CoroWrapper)
|
||||||
|
|
||||||
|
def get_name(coro):
|
||||||
|
# Coroutines compiled with Cython sometimes don't have
|
||||||
|
# proper __qualname__ or __name__. While that is a bug
|
||||||
|
# in Cython, asyncio shouldn't crash with an AttributeError
|
||||||
|
# in its __repr__ functions.
|
||||||
|
if is_corowrapper:
|
||||||
|
return format_helpers._format_callback(coro.func, (), {})
|
||||||
|
|
||||||
|
if hasattr(coro, '__qualname__') and coro.__qualname__:
|
||||||
|
coro_name = coro.__qualname__
|
||||||
|
elif hasattr(coro, '__name__') and coro.__name__:
|
||||||
|
coro_name = coro.__name__
|
||||||
|
else:
|
||||||
|
# Stop masking Cython bugs, expose them in a friendly way.
|
||||||
|
coro_name = f'<{type(coro).__name__} without __name__>'
|
||||||
|
return f'{coro_name}()'
|
||||||
|
|
||||||
|
def is_running(coro):
|
||||||
|
try:
|
||||||
|
return coro.cr_running
|
||||||
|
except AttributeError:
|
||||||
|
try:
|
||||||
|
return coro.gi_running
|
||||||
|
except AttributeError:
|
||||||
|
return False
|
||||||
|
|
||||||
|
coro_code = None
|
||||||
|
if hasattr(coro, 'cr_code') and coro.cr_code:
|
||||||
|
coro_code = coro.cr_code
|
||||||
|
elif hasattr(coro, 'gi_code') and coro.gi_code:
|
||||||
|
coro_code = coro.gi_code
|
||||||
|
|
||||||
|
coro_name = get_name(coro)
|
||||||
|
|
||||||
|
if not coro_code:
|
||||||
|
# Built-in types might not have __qualname__ or __name__.
|
||||||
|
if is_running(coro):
|
||||||
|
return f'{coro_name} running'
|
||||||
|
else:
|
||||||
|
return coro_name
|
||||||
|
|
||||||
|
coro_frame = None
|
||||||
|
if hasattr(coro, 'gi_frame') and coro.gi_frame:
|
||||||
|
coro_frame = coro.gi_frame
|
||||||
|
elif hasattr(coro, 'cr_frame') and coro.cr_frame:
|
||||||
|
coro_frame = coro.cr_frame
|
||||||
|
|
||||||
|
# If Cython's coroutine has a fake code object without proper
|
||||||
|
# co_filename -- expose that.
|
||||||
|
filename = coro_code.co_filename or '<empty co_filename>'
|
||||||
|
|
||||||
|
lineno = 0
|
||||||
|
if (is_corowrapper and
|
||||||
|
coro.func is not None and
|
||||||
|
not inspect.isgeneratorfunction(coro.func)):
|
||||||
|
source = format_helpers._get_function_source(coro.func)
|
||||||
|
if source is not None:
|
||||||
|
filename, lineno = source
|
||||||
|
if coro_frame is None:
|
||||||
|
coro_repr = f'{coro_name} done, defined at {filename}:{lineno}'
|
||||||
|
else:
|
||||||
|
coro_repr = f'{coro_name} running, defined at {filename}:{lineno}'
|
||||||
|
|
||||||
|
elif coro_frame is not None:
|
||||||
|
lineno = coro_frame.f_lineno
|
||||||
|
coro_repr = f'{coro_name} running at {filename}:{lineno}'
|
||||||
|
|
||||||
|
else:
|
||||||
|
lineno = coro_code.co_firstlineno
|
||||||
|
coro_repr = f'{coro_name} done, defined at {filename}:{lineno}'
|
||||||
|
|
||||||
|
return coro_repr
|
796
Lib/asyncio/events.py
Normal file
796
Lib/asyncio/events.py
Normal file
|
@ -0,0 +1,796 @@
|
||||||
|
"""Event loop and event loop policy."""
|
||||||
|
|
||||||
|
__all__ = (
|
||||||
|
'AbstractEventLoopPolicy',
|
||||||
|
'AbstractEventLoop', 'AbstractServer',
|
||||||
|
'Handle', 'TimerHandle', 'SendfileNotAvailableError',
|
||||||
|
'get_event_loop_policy', 'set_event_loop_policy',
|
||||||
|
'get_event_loop', 'set_event_loop', 'new_event_loop',
|
||||||
|
'get_child_watcher', 'set_child_watcher',
|
||||||
|
'_set_running_loop', 'get_running_loop',
|
||||||
|
'_get_running_loop',
|
||||||
|
)
|
||||||
|
|
||||||
|
import contextvars
|
||||||
|
import os
|
||||||
|
import socket
|
||||||
|
import subprocess
|
||||||
|
import sys
|
||||||
|
import threading
|
||||||
|
|
||||||
|
from . import format_helpers
|
||||||
|
|
||||||
|
|
||||||
|
class SendfileNotAvailableError(RuntimeError):
|
||||||
|
"""Sendfile syscall is not available.
|
||||||
|
|
||||||
|
Raised if OS does not support sendfile syscall for given socket or
|
||||||
|
file type.
|
||||||
|
"""
|
||||||
|
|
||||||
|
|
||||||
|
class Handle:
|
||||||
|
"""Object returned by callback registration methods."""
|
||||||
|
|
||||||
|
__slots__ = ('_callback', '_args', '_cancelled', '_loop',
|
||||||
|
'_source_traceback', '_repr', '__weakref__',
|
||||||
|
'_context')
|
||||||
|
|
||||||
|
def __init__(self, callback, args, loop, context=None):
|
||||||
|
if context is None:
|
||||||
|
context = contextvars.copy_context()
|
||||||
|
self._context = context
|
||||||
|
self._loop = loop
|
||||||
|
self._callback = callback
|
||||||
|
self._args = args
|
||||||
|
self._cancelled = False
|
||||||
|
self._repr = None
|
||||||
|
if self._loop.get_debug():
|
||||||
|
self._source_traceback = format_helpers.extract_stack(
|
||||||
|
sys._getframe(1))
|
||||||
|
else:
|
||||||
|
self._source_traceback = None
|
||||||
|
|
||||||
|
def _repr_info(self):
|
||||||
|
info = [self.__class__.__name__]
|
||||||
|
if self._cancelled:
|
||||||
|
info.append('cancelled')
|
||||||
|
if self._callback is not None:
|
||||||
|
info.append(format_helpers._format_callback_source(
|
||||||
|
self._callback, self._args))
|
||||||
|
if self._source_traceback:
|
||||||
|
frame = self._source_traceback[-1]
|
||||||
|
info.append(f'created at {frame[0]}:{frame[1]}')
|
||||||
|
return info
|
||||||
|
|
||||||
|
def __repr__(self):
|
||||||
|
if self._repr is not None:
|
||||||
|
return self._repr
|
||||||
|
info = self._repr_info()
|
||||||
|
return '<{}>'.format(' '.join(info))
|
||||||
|
|
||||||
|
def cancel(self):
|
||||||
|
if not self._cancelled:
|
||||||
|
self._cancelled = True
|
||||||
|
if self._loop.get_debug():
|
||||||
|
# Keep a representation in debug mode to keep callback and
|
||||||
|
# parameters. For example, to log the warning
|
||||||
|
# "Executing <Handle...> took 2.5 second"
|
||||||
|
self._repr = repr(self)
|
||||||
|
self._callback = None
|
||||||
|
self._args = None
|
||||||
|
|
||||||
|
def cancelled(self):
|
||||||
|
return self._cancelled
|
||||||
|
|
||||||
|
def _run(self):
|
||||||
|
try:
|
||||||
|
self._context.run(self._callback, *self._args)
|
||||||
|
except Exception as exc:
|
||||||
|
cb = format_helpers._format_callback_source(
|
||||||
|
self._callback, self._args)
|
||||||
|
msg = f'Exception in callback {cb}'
|
||||||
|
context = {
|
||||||
|
'message': msg,
|
||||||
|
'exception': exc,
|
||||||
|
'handle': self,
|
||||||
|
}
|
||||||
|
if self._source_traceback:
|
||||||
|
context['source_traceback'] = self._source_traceback
|
||||||
|
self._loop.call_exception_handler(context)
|
||||||
|
self = None # Needed to break cycles when an exception occurs.
|
||||||
|
|
||||||
|
|
||||||
|
class TimerHandle(Handle):
|
||||||
|
"""Object returned by timed callback registration methods."""
|
||||||
|
|
||||||
|
__slots__ = ['_scheduled', '_when']
|
||||||
|
|
||||||
|
def __init__(self, when, callback, args, loop, context=None):
|
||||||
|
assert when is not None
|
||||||
|
super().__init__(callback, args, loop, context)
|
||||||
|
if self._source_traceback:
|
||||||
|
del self._source_traceback[-1]
|
||||||
|
self._when = when
|
||||||
|
self._scheduled = False
|
||||||
|
|
||||||
|
def _repr_info(self):
|
||||||
|
info = super()._repr_info()
|
||||||
|
pos = 2 if self._cancelled else 1
|
||||||
|
info.insert(pos, f'when={self._when}')
|
||||||
|
return info
|
||||||
|
|
||||||
|
def __hash__(self):
|
||||||
|
return hash(self._when)
|
||||||
|
|
||||||
|
def __lt__(self, other):
|
||||||
|
return self._when < other._when
|
||||||
|
|
||||||
|
def __le__(self, other):
|
||||||
|
if self._when < other._when:
|
||||||
|
return True
|
||||||
|
return self.__eq__(other)
|
||||||
|
|
||||||
|
def __gt__(self, other):
|
||||||
|
return self._when > other._when
|
||||||
|
|
||||||
|
def __ge__(self, other):
|
||||||
|
if self._when > other._when:
|
||||||
|
return True
|
||||||
|
return self.__eq__(other)
|
||||||
|
|
||||||
|
def __eq__(self, other):
|
||||||
|
if isinstance(other, TimerHandle):
|
||||||
|
return (self._when == other._when and
|
||||||
|
self._callback == other._callback and
|
||||||
|
self._args == other._args and
|
||||||
|
self._cancelled == other._cancelled)
|
||||||
|
return NotImplemented
|
||||||
|
|
||||||
|
def __ne__(self, other):
|
||||||
|
equal = self.__eq__(other)
|
||||||
|
return NotImplemented if equal is NotImplemented else not equal
|
||||||
|
|
||||||
|
def cancel(self):
|
||||||
|
if not self._cancelled:
|
||||||
|
self._loop._timer_handle_cancelled(self)
|
||||||
|
super().cancel()
|
||||||
|
|
||||||
|
def when(self):
|
||||||
|
"""Return a scheduled callback time.
|
||||||
|
|
||||||
|
The time is an absolute timestamp, using the same time
|
||||||
|
reference as loop.time().
|
||||||
|
"""
|
||||||
|
return self._when
|
||||||
|
|
||||||
|
|
||||||
|
class AbstractServer:
|
||||||
|
"""Abstract server returned by create_server()."""
|
||||||
|
|
||||||
|
def close(self):
|
||||||
|
"""Stop serving. This leaves existing connections open."""
|
||||||
|
raise NotImplementedError
|
||||||
|
|
||||||
|
def get_loop(self):
|
||||||
|
"""Get the event loop the Server object is attached to."""
|
||||||
|
raise NotImplementedError
|
||||||
|
|
||||||
|
def is_serving(self):
|
||||||
|
"""Return True if the server is accepting connections."""
|
||||||
|
raise NotImplementedError
|
||||||
|
|
||||||
|
async def start_serving(self):
|
||||||
|
"""Start accepting connections.
|
||||||
|
|
||||||
|
This method is idempotent, so it can be called when
|
||||||
|
the server is already being serving.
|
||||||
|
"""
|
||||||
|
raise NotImplementedError
|
||||||
|
|
||||||
|
async def serve_forever(self):
|
||||||
|
"""Start accepting connections until the coroutine is cancelled.
|
||||||
|
|
||||||
|
The server is closed when the coroutine is cancelled.
|
||||||
|
"""
|
||||||
|
raise NotImplementedError
|
||||||
|
|
||||||
|
async def wait_closed(self):
|
||||||
|
"""Coroutine to wait until service is closed."""
|
||||||
|
raise NotImplementedError
|
||||||
|
|
||||||
|
async def __aenter__(self):
|
||||||
|
return self
|
||||||
|
|
||||||
|
async def __aexit__(self, *exc):
|
||||||
|
self.close()
|
||||||
|
await self.wait_closed()
|
||||||
|
|
||||||
|
|
||||||
|
class AbstractEventLoop:
|
||||||
|
"""Abstract event loop."""
|
||||||
|
|
||||||
|
# Running and stopping the event loop.
|
||||||
|
|
||||||
|
def run_forever(self):
|
||||||
|
"""Run the event loop until stop() is called."""
|
||||||
|
raise NotImplementedError
|
||||||
|
|
||||||
|
def run_until_complete(self, future):
|
||||||
|
"""Run the event loop until a Future is done.
|
||||||
|
|
||||||
|
Return the Future's result, or raise its exception.
|
||||||
|
"""
|
||||||
|
raise NotImplementedError
|
||||||
|
|
||||||
|
def stop(self):
|
||||||
|
"""Stop the event loop as soon as reasonable.
|
||||||
|
|
||||||
|
Exactly how soon that is may depend on the implementation, but
|
||||||
|
no more I/O callbacks should be scheduled.
|
||||||
|
"""
|
||||||
|
raise NotImplementedError
|
||||||
|
|
||||||
|
def is_running(self):
|
||||||
|
"""Return whether the event loop is currently running."""
|
||||||
|
raise NotImplementedError
|
||||||
|
|
||||||
|
def is_closed(self):
|
||||||
|
"""Returns True if the event loop was closed."""
|
||||||
|
raise NotImplementedError
|
||||||
|
|
||||||
|
def close(self):
|
||||||
|
"""Close the loop.
|
||||||
|
|
||||||
|
The loop should not be running.
|
||||||
|
|
||||||
|
This is idempotent and irreversible.
|
||||||
|
|
||||||
|
No other methods should be called after this one.
|
||||||
|
"""
|
||||||
|
raise NotImplementedError
|
||||||
|
|
||||||
|
async def shutdown_asyncgens(self):
|
||||||
|
"""Shutdown all active asynchronous generators."""
|
||||||
|
raise NotImplementedError
|
||||||
|
|
||||||
|
# Methods scheduling callbacks. All these return Handles.
|
||||||
|
|
||||||
|
def _timer_handle_cancelled(self, handle):
|
||||||
|
"""Notification that a TimerHandle has been cancelled."""
|
||||||
|
raise NotImplementedError
|
||||||
|
|
||||||
|
def call_soon(self, callback, *args):
|
||||||
|
return self.call_later(0, callback, *args)
|
||||||
|
|
||||||
|
def call_later(self, delay, callback, *args):
|
||||||
|
raise NotImplementedError
|
||||||
|
|
||||||
|
def call_at(self, when, callback, *args):
|
||||||
|
raise NotImplementedError
|
||||||
|
|
||||||
|
def time(self):
|
||||||
|
raise NotImplementedError
|
||||||
|
|
||||||
|
def create_future(self):
|
||||||
|
raise NotImplementedError
|
||||||
|
|
||||||
|
# Method scheduling a coroutine object: create a task.
|
||||||
|
|
||||||
|
def create_task(self, coro):
|
||||||
|
raise NotImplementedError
|
||||||
|
|
||||||
|
# Methods for interacting with threads.
|
||||||
|
|
||||||
|
def call_soon_threadsafe(self, callback, *args):
|
||||||
|
raise NotImplementedError
|
||||||
|
|
||||||
|
async def run_in_executor(self, executor, func, *args):
|
||||||
|
raise NotImplementedError
|
||||||
|
|
||||||
|
def set_default_executor(self, executor):
|
||||||
|
raise NotImplementedError
|
||||||
|
|
||||||
|
# Network I/O methods returning Futures.
|
||||||
|
|
||||||
|
async def getaddrinfo(self, host, port, *,
|
||||||
|
family=0, type=0, proto=0, flags=0):
|
||||||
|
raise NotImplementedError
|
||||||
|
|
||||||
|
async def getnameinfo(self, sockaddr, flags=0):
|
||||||
|
raise NotImplementedError
|
||||||
|
|
||||||
|
async def create_connection(
|
||||||
|
self, protocol_factory, host=None, port=None,
|
||||||
|
*, ssl=None, family=0, proto=0,
|
||||||
|
flags=0, sock=None, local_addr=None,
|
||||||
|
server_hostname=None,
|
||||||
|
ssl_handshake_timeout=None):
|
||||||
|
raise NotImplementedError
|
||||||
|
|
||||||
|
async def create_server(
|
||||||
|
self, protocol_factory, host=None, port=None,
|
||||||
|
*, family=socket.AF_UNSPEC,
|
||||||
|
flags=socket.AI_PASSIVE, sock=None, backlog=100,
|
||||||
|
ssl=None, reuse_address=None, reuse_port=None,
|
||||||
|
ssl_handshake_timeout=None,
|
||||||
|
start_serving=True):
|
||||||
|
"""A coroutine which creates a TCP server bound to host and port.
|
||||||
|
|
||||||
|
The return value is a Server object which can be used to stop
|
||||||
|
the service.
|
||||||
|
|
||||||
|
If host is an empty string or None all interfaces are assumed
|
||||||
|
and a list of multiple sockets will be returned (most likely
|
||||||
|
one for IPv4 and another one for IPv6). The host parameter can also be
|
||||||
|
a sequence (e.g. list) of hosts to bind to.
|
||||||
|
|
||||||
|
family can be set to either AF_INET or AF_INET6 to force the
|
||||||
|
socket to use IPv4 or IPv6. If not set it will be determined
|
||||||
|
from host (defaults to AF_UNSPEC).
|
||||||
|
|
||||||
|
flags is a bitmask for getaddrinfo().
|
||||||
|
|
||||||
|
sock can optionally be specified in order to use a preexisting
|
||||||
|
socket object.
|
||||||
|
|
||||||
|
backlog is the maximum number of queued connections passed to
|
||||||
|
listen() (defaults to 100).
|
||||||
|
|
||||||
|
ssl can be set to an SSLContext to enable SSL over the
|
||||||
|
accepted connections.
|
||||||
|
|
||||||
|
reuse_address tells the kernel to reuse a local socket in
|
||||||
|
TIME_WAIT state, without waiting for its natural timeout to
|
||||||
|
expire. If not specified will automatically be set to True on
|
||||||
|
UNIX.
|
||||||
|
|
||||||
|
reuse_port tells the kernel to allow this endpoint to be bound to
|
||||||
|
the same port as other existing endpoints are bound to, so long as
|
||||||
|
they all set this flag when being created. This option is not
|
||||||
|
supported on Windows.
|
||||||
|
|
||||||
|
ssl_handshake_timeout is the time in seconds that an SSL server
|
||||||
|
will wait for completion of the SSL handshake before aborting the
|
||||||
|
connection. Default is 60s.
|
||||||
|
|
||||||
|
start_serving set to True (default) causes the created server
|
||||||
|
to start accepting connections immediately. When set to False,
|
||||||
|
the user should await Server.start_serving() or Server.serve_forever()
|
||||||
|
to make the server to start accepting connections.
|
||||||
|
"""
|
||||||
|
raise NotImplementedError
|
||||||
|
|
||||||
|
async def sendfile(self, transport, file, offset=0, count=None,
|
||||||
|
*, fallback=True):
|
||||||
|
"""Send a file through a transport.
|
||||||
|
|
||||||
|
Return an amount of sent bytes.
|
||||||
|
"""
|
||||||
|
raise NotImplementedError
|
||||||
|
|
||||||
|
async def start_tls(self, transport, protocol, sslcontext, *,
|
||||||
|
server_side=False,
|
||||||
|
server_hostname=None,
|
||||||
|
ssl_handshake_timeout=None):
|
||||||
|
"""Upgrade a transport to TLS.
|
||||||
|
|
||||||
|
Return a new transport that *protocol* should start using
|
||||||
|
immediately.
|
||||||
|
"""
|
||||||
|
raise NotImplementedError
|
||||||
|
|
||||||
|
async def create_unix_connection(
|
||||||
|
self, protocol_factory, path=None, *,
|
||||||
|
ssl=None, sock=None,
|
||||||
|
server_hostname=None,
|
||||||
|
ssl_handshake_timeout=None):
|
||||||
|
raise NotImplementedError
|
||||||
|
|
||||||
|
async def create_unix_server(
|
||||||
|
self, protocol_factory, path=None, *,
|
||||||
|
sock=None, backlog=100, ssl=None,
|
||||||
|
ssl_handshake_timeout=None,
|
||||||
|
start_serving=True):
|
||||||
|
"""A coroutine which creates a UNIX Domain Socket server.
|
||||||
|
|
||||||
|
The return value is a Server object, which can be used to stop
|
||||||
|
the service.
|
||||||
|
|
||||||
|
path is a str, representing a file systsem path to bind the
|
||||||
|
server socket to.
|
||||||
|
|
||||||
|
sock can optionally be specified in order to use a preexisting
|
||||||
|
socket object.
|
||||||
|
|
||||||
|
backlog is the maximum number of queued connections passed to
|
||||||
|
listen() (defaults to 100).
|
||||||
|
|
||||||
|
ssl can be set to an SSLContext to enable SSL over the
|
||||||
|
accepted connections.
|
||||||
|
|
||||||
|
ssl_handshake_timeout is the time in seconds that an SSL server
|
||||||
|
will wait for the SSL handshake to complete (defaults to 60s).
|
||||||
|
|
||||||
|
start_serving set to True (default) causes the created server
|
||||||
|
to start accepting connections immediately. When set to False,
|
||||||
|
the user should await Server.start_serving() or Server.serve_forever()
|
||||||
|
to make the server to start accepting connections.
|
||||||
|
"""
|
||||||
|
raise NotImplementedError
|
||||||
|
|
||||||
|
async def create_datagram_endpoint(self, protocol_factory,
|
||||||
|
local_addr=None, remote_addr=None, *,
|
||||||
|
family=0, proto=0, flags=0,
|
||||||
|
reuse_address=None, reuse_port=None,
|
||||||
|
allow_broadcast=None, sock=None):
|
||||||
|
"""A coroutine which creates a datagram endpoint.
|
||||||
|
|
||||||
|
This method will try to establish the endpoint in the background.
|
||||||
|
When successful, the coroutine returns a (transport, protocol) pair.
|
||||||
|
|
||||||
|
protocol_factory must be a callable returning a protocol instance.
|
||||||
|
|
||||||
|
socket family AF_INET, socket.AF_INET6 or socket.AF_UNIX depending on
|
||||||
|
host (or family if specified), socket type SOCK_DGRAM.
|
||||||
|
|
||||||
|
reuse_address tells the kernel to reuse a local socket in
|
||||||
|
TIME_WAIT state, without waiting for its natural timeout to
|
||||||
|
expire. If not specified it will automatically be set to True on
|
||||||
|
UNIX.
|
||||||
|
|
||||||
|
reuse_port tells the kernel to allow this endpoint to be bound to
|
||||||
|
the same port as other existing endpoints are bound to, so long as
|
||||||
|
they all set this flag when being created. This option is not
|
||||||
|
supported on Windows and some UNIX's. If the
|
||||||
|
:py:data:`~socket.SO_REUSEPORT` constant is not defined then this
|
||||||
|
capability is unsupported.
|
||||||
|
|
||||||
|
allow_broadcast tells the kernel to allow this endpoint to send
|
||||||
|
messages to the broadcast address.
|
||||||
|
|
||||||
|
sock can optionally be specified in order to use a preexisting
|
||||||
|
socket object.
|
||||||
|
"""
|
||||||
|
raise NotImplementedError
|
||||||
|
|
||||||
|
# Pipes and subprocesses.
|
||||||
|
|
||||||
|
async def connect_read_pipe(self, protocol_factory, pipe):
|
||||||
|
"""Register read pipe in event loop. Set the pipe to non-blocking mode.
|
||||||
|
|
||||||
|
protocol_factory should instantiate object with Protocol interface.
|
||||||
|
pipe is a file-like object.
|
||||||
|
Return pair (transport, protocol), where transport supports the
|
||||||
|
ReadTransport interface."""
|
||||||
|
        # The reason to accept a file-like object instead of just a file
        # descriptor is: we need to own the pipe and close it when the
        # transport is finished.  We can get complicated errors if we pass
        # f.fileno(), close the fd in the pipe transport, then close f,
        # and vice versa.
        raise NotImplementedError

    async def connect_write_pipe(self, protocol_factory, pipe):
        """Register write pipe in event loop.

        protocol_factory should instantiate an object with the BaseProtocol
        interface.  Pipe is a file-like object already switched to
        non-blocking mode.  Return pair (transport, protocol), where
        transport supports the WriteTransport interface."""
        # The reason to accept a file-like object instead of just a file
        # descriptor is: we need to own the pipe and close it when the
        # transport is finished.  We can get complicated errors if we pass
        # f.fileno(), close the fd in the pipe transport, then close f,
        # and vice versa.
        raise NotImplementedError

    async def subprocess_shell(self, protocol_factory, cmd, *,
                               stdin=subprocess.PIPE,
                               stdout=subprocess.PIPE,
                               stderr=subprocess.PIPE,
                               **kwargs):
        raise NotImplementedError

    async def subprocess_exec(self, protocol_factory, *args,
                              stdin=subprocess.PIPE,
                              stdout=subprocess.PIPE,
                              stderr=subprocess.PIPE,
                              **kwargs):
        raise NotImplementedError

    # Ready-based callback registration methods.
    # The add_*() methods return None.
    # The remove_*() methods return True if something was removed,
    # False if there was nothing to delete.

    def add_reader(self, fd, callback, *args):
        raise NotImplementedError

    def remove_reader(self, fd):
        raise NotImplementedError

    def add_writer(self, fd, callback, *args):
        raise NotImplementedError

    def remove_writer(self, fd):
        raise NotImplementedError

    # Completion-based I/O methods returning Futures.

    async def sock_recv(self, sock, nbytes):
        raise NotImplementedError

    async def sock_recv_into(self, sock, buf):
        raise NotImplementedError

    async def sock_sendall(self, sock, data):
        raise NotImplementedError

    async def sock_connect(self, sock, address):
        raise NotImplementedError

    async def sock_accept(self, sock):
        raise NotImplementedError

    async def sock_sendfile(self, sock, file, offset=0, count=None,
                            *, fallback=None):
        raise NotImplementedError

    # Signal handling.

    def add_signal_handler(self, sig, callback, *args):
        raise NotImplementedError

    def remove_signal_handler(self, sig):
        raise NotImplementedError

    # Task factory.

    def set_task_factory(self, factory):
        raise NotImplementedError

    def get_task_factory(self):
        raise NotImplementedError

    # Error handlers.

    def get_exception_handler(self):
        raise NotImplementedError

    def set_exception_handler(self, handler):
        raise NotImplementedError

    def default_exception_handler(self, context):
        raise NotImplementedError

    def call_exception_handler(self, context):
        raise NotImplementedError

    # Debug flag management.

    def get_debug(self):
        raise NotImplementedError

    def set_debug(self, enabled):
        raise NotImplementedError


class AbstractEventLoopPolicy:
    """Abstract policy for accessing the event loop."""

    def get_event_loop(self):
        """Get the event loop for the current context.

        Returns an event loop object implementing the BaseEventLoop interface,
        or raises an exception in case no event loop has been set for the
        current context and the current policy does not specify to create one.

        It should never return None."""
        raise NotImplementedError

    def set_event_loop(self, loop):
        """Set the event loop for the current context to loop."""
        raise NotImplementedError

    def new_event_loop(self):
        """Create and return a new event loop object according to this
        policy's rules. If there's a need to set this loop as the event loop
        for the current context, set_event_loop must be called explicitly."""
        raise NotImplementedError

    # Child processes handling (Unix only).

    def get_child_watcher(self):
        """Get the watcher for child processes."""
        raise NotImplementedError

    def set_child_watcher(self, watcher):
        """Set the watcher for child processes."""
        raise NotImplementedError


class BaseDefaultEventLoopPolicy(AbstractEventLoopPolicy):
    """Default policy implementation for accessing the event loop.

    In this policy, each thread has its own event loop.  However, we
    only automatically create an event loop by default for the main
    thread; other threads by default have no event loop.

    Other policies may have different rules (e.g. a single global
    event loop, or automatically creating an event loop per thread, or
    using some other notion of context to which an event loop is
    associated).
    """

    _loop_factory = None

    class _Local(threading.local):
        _loop = None
        _set_called = False

    def __init__(self):
        self._local = self._Local()

    def get_event_loop(self):
        """Get the event loop for the current thread.

        Creates a new event loop for the main thread if none has been
        set; raises RuntimeError when no event loop is set in any
        other thread.
        """
        if (self._local._loop is None and
                not self._local._set_called and
                isinstance(threading.current_thread(), threading._MainThread)):
            self.set_event_loop(self.new_event_loop())

        if self._local._loop is None:
            raise RuntimeError('There is no current event loop in thread %r.'
                               % threading.current_thread().name)

        return self._local._loop

    def set_event_loop(self, loop):
        """Set the event loop."""
        self._local._set_called = True
        assert loop is None or isinstance(loop, AbstractEventLoop)
        self._local._loop = loop

    def new_event_loop(self):
        """Create a new event loop.

        You must call set_event_loop() to make this the current event
        loop.
        """
        return self._loop_factory()


# Event loop policy.  The policy itself is always global, even if the
# policy's rules say that there is an event loop per thread (or other
# notion of context).  The default policy is installed by the first
# call to get_event_loop_policy().
_event_loop_policy = None

# Lock for protecting the on-the-fly creation of the event loop policy.
_lock = threading.Lock()


# A TLS for the running event loop, used by _get_running_loop.
class _RunningLoop(threading.local):
    loop_pid = (None, None)


_running_loop = _RunningLoop()


def get_running_loop():
    """Return the running event loop.  Raise a RuntimeError if there is none.

    This function is thread-specific.
    """
    # NOTE: this function is implemented in C (see _asynciomodule.c)
    loop = _get_running_loop()
    if loop is None:
        raise RuntimeError('no running event loop')
    return loop


def _get_running_loop():
    """Return the running event loop or None.

    This is a low-level function intended to be used by event loops.
    This function is thread-specific.
    """
    # NOTE: this function is implemented in C (see _asynciomodule.c)
    running_loop, pid = _running_loop.loop_pid
    if running_loop is not None and pid == os.getpid():
        return running_loop


def _set_running_loop(loop):
    """Set the running event loop.

    This is a low-level function intended to be used by event loops.
    This function is thread-specific.
    """
    # NOTE: this function is implemented in C (see _asynciomodule.c)
    _running_loop.loop_pid = (loop, os.getpid())


def _init_event_loop_policy():
    global _event_loop_policy
    with _lock:
        if _event_loop_policy is None:  # pragma: no branch
            from . import DefaultEventLoopPolicy
            _event_loop_policy = DefaultEventLoopPolicy()


def get_event_loop_policy():
    """Get the current event loop policy."""
    if _event_loop_policy is None:
        _init_event_loop_policy()
    return _event_loop_policy


def set_event_loop_policy(policy):
    """Set the current event loop policy.

    If policy is None, the default policy is restored."""
    global _event_loop_policy
    assert policy is None or isinstance(policy, AbstractEventLoopPolicy)
    _event_loop_policy = policy


def get_event_loop():
    """Return an asyncio event loop.

    When called from a coroutine or a callback (e.g. scheduled with call_soon
    or similar API), this function will always return the running event loop.

    If there is no running event loop set, the function will return
    the result of the `get_event_loop_policy().get_event_loop()` call.
    """
    # NOTE: this function is implemented in C (see _asynciomodule.c)
    current_loop = _get_running_loop()
    if current_loop is not None:
        return current_loop
    return get_event_loop_policy().get_event_loop()


def set_event_loop(loop):
    """Equivalent to calling get_event_loop_policy().set_event_loop(loop)."""
    get_event_loop_policy().set_event_loop(loop)


def new_event_loop():
    """Equivalent to calling get_event_loop_policy().new_event_loop()."""
    return get_event_loop_policy().new_event_loop()


def get_child_watcher():
    """Equivalent to calling get_event_loop_policy().get_child_watcher()."""
    return get_event_loop_policy().get_child_watcher()


def set_child_watcher(watcher):
    """Equivalent to calling
    get_event_loop_policy().set_child_watcher(watcher)."""
    return get_event_loop_policy().set_child_watcher(watcher)


# Alias pure-Python implementations for testing purposes.
_py__get_running_loop = _get_running_loop
_py__set_running_loop = _set_running_loop
_py_get_running_loop = get_running_loop
_py_get_event_loop = get_event_loop


try:
    # get_event_loop() is one of the most frequently called
    # functions in asyncio.  Pure Python implementation is
    # about 4 times slower than C-accelerated.
    from _asyncio import (_get_running_loop, _set_running_loop,
                          get_running_loop, get_event_loop)
except ImportError:
    pass
else:
    # Alias C implementations for testing purposes.
    _c__get_running_loop = _get_running_loop
    _c__set_running_loop = _set_running_loop
    _c_get_running_loop = get_running_loop
    _c_get_event_loop = get_event_loop
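
A short sketch of how the policy machinery above is typically driven. It uses only the public asyncio API on Python 3.7 (which this commit targets); the coroutine name main is illustrative:

import asyncio


async def main():
    # Inside a coroutine there is always a running loop, and
    # get_event_loop() simply returns it.
    assert asyncio.get_running_loop() is asyncio.get_event_loop()


# Outside a coroutine, get_event_loop() defers to the policy; a loop
# can also be installed explicitly via set_event_loop().
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
try:
    loop.run_until_complete(main())
finally:
    asyncio.set_event_loop(None)
    loop.close()
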
76
Lib/asyncio/format_helpers.py
Normal file
@@ -0,0 +1,76 @@
import functools
import inspect
import reprlib
import sys
import traceback

from . import constants


def _get_function_source(func):
    func = inspect.unwrap(func)
    if inspect.isfunction(func):
        code = func.__code__
        return (code.co_filename, code.co_firstlineno)
    if isinstance(func, functools.partial):
        return _get_function_source(func.func)
    if isinstance(func, functools.partialmethod):
        return _get_function_source(func.func)
    return None


def _format_callback_source(func, args):
    func_repr = _format_callback(func, args, None)
    source = _get_function_source(func)
    if source:
        func_repr += f' at {source[0]}:{source[1]}'
    return func_repr


def _format_args_and_kwargs(args, kwargs):
    """Format function arguments and keyword arguments.

    Special case for a single parameter: ('hello',) is formatted as ('hello').
    """
    # use reprlib to limit the length of the output
    items = []
    if args:
        items.extend(reprlib.repr(arg) for arg in args)
    if kwargs:
        items.extend(f'{k}={reprlib.repr(v)}' for k, v in kwargs.items())
    return '({})'.format(', '.join(items))


def _format_callback(func, args, kwargs, suffix=''):
    if isinstance(func, functools.partial):
        suffix = _format_args_and_kwargs(args, kwargs) + suffix
        return _format_callback(func.func, func.args, func.keywords, suffix)

    if hasattr(func, '__qualname__') and func.__qualname__:
        func_repr = func.__qualname__
    elif hasattr(func, '__name__') and func.__name__:
        func_repr = func.__name__
    else:
        func_repr = repr(func)

    func_repr += _format_args_and_kwargs(args, kwargs)
    if suffix:
        func_repr += suffix
    return func_repr


def extract_stack(f=None, limit=None):
    """Replacement for traceback.extract_stack() that only does the
    necessary work for asyncio debug mode.
    """
    if f is None:
        f = sys._getframe().f_back
    if limit is None:
        # Limit the amount of work to a reasonable amount, as extract_stack()
        # can be called for each coroutine and future in debug mode.
        limit = constants.DEBUG_STACK_DEPTH
    stack = traceback.StackSummary.extract(traceback.walk_stack(f),
                                           limit=limit,
                                           lookup_lines=False)
    stack.reverse()
    return stack
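
These helpers are private to asyncio, but their output is easy to see from a quick experiment; a sketch under that assumption (the handler function and printed path are made up for illustration):

import functools
from asyncio import format_helpers


def handler(x, y):
    pass


cb = functools.partial(handler, 1, y=2)
# _format_callback_source() renders the callback plus where it was
# defined, e.g. "handler(1, y=2)() at /tmp/demo.py:5".
print(format_helpers._format_callback_source(cb, ()))
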
387
Lib/asyncio/futures.py
Normal file
@@ -0,0 +1,387 @@
"""A Future class similar to the one in PEP 3148."""
|
||||||
|
|
||||||
|
__all__ = (
|
||||||
|
'CancelledError', 'TimeoutError', 'InvalidStateError',
|
||||||
|
'Future', 'wrap_future', 'isfuture',
|
||||||
|
)
|
||||||
|
|
||||||
|
import concurrent.futures
|
||||||
|
import contextvars
|
||||||
|
import logging
|
||||||
|
import sys
|
||||||
|
|
||||||
|
from . import base_futures
|
||||||
|
from . import events
|
||||||
|
from . import format_helpers
|
||||||
|
|
||||||
|
|
||||||
|
CancelledError = base_futures.CancelledError
|
||||||
|
InvalidStateError = base_futures.InvalidStateError
|
||||||
|
TimeoutError = base_futures.TimeoutError
|
||||||
|
isfuture = base_futures.isfuture
|
||||||
|
|
||||||
|
|
||||||
|
_PENDING = base_futures._PENDING
|
||||||
|
_CANCELLED = base_futures._CANCELLED
|
||||||
|
_FINISHED = base_futures._FINISHED
|
||||||
|
|
||||||
|
|
||||||
|
STACK_DEBUG = logging.DEBUG - 1 # heavy-duty debugging
|
||||||
|
|
||||||
|
|
||||||
|
class Future:
|
||||||
|
"""This class is *almost* compatible with concurrent.futures.Future.
|
||||||
|
|
||||||
|
Differences:
|
||||||
|
|
||||||
|
- This class is not thread-safe.
|
||||||
|
|
||||||
|
- result() and exception() do not take a timeout argument and
|
||||||
|
raise an exception when the future isn't done yet.
|
||||||
|
|
||||||
|
- Callbacks registered with add_done_callback() are always called
|
||||||
|
via the event loop's call_soon().
|
||||||
|
|
||||||
|
- This class is not compatible with the wait() and as_completed()
|
||||||
|
methods in the concurrent.futures package.
|
||||||
|
|
||||||
|
(In Python 3.4 or later we may be able to unify the implementations.)
|
||||||
|
"""
|
||||||
|
|
||||||
|
# Class variables serving as defaults for instance variables.
|
||||||
|
_state = _PENDING
|
||||||
|
_result = None
|
||||||
|
_exception = None
|
||||||
|
_loop = None
|
||||||
|
_source_traceback = None
|
||||||
|
|
||||||
|
# This field is used for a dual purpose:
|
||||||
|
# - Its presence is a marker to declare that a class implements
|
||||||
|
# the Future protocol (i.e. is intended to be duck-type compatible).
|
||||||
|
# The value must also be not-None, to enable a subclass to declare
|
||||||
|
# that it is not compatible by setting this to None.
|
||||||
|
# - It is set by __iter__() below so that Task._step() can tell
|
||||||
|
# the difference between
|
||||||
|
# `await Future()` or`yield from Future()` (correct) vs.
|
||||||
|
# `yield Future()` (incorrect).
|
||||||
|
_asyncio_future_blocking = False
|
||||||
|
|
||||||
|
__log_traceback = False
|
||||||
|
|
||||||
|
def __init__(self, *, loop=None):
|
||||||
|
"""Initialize the future.
|
||||||
|
|
||||||
|
The optional event_loop argument allows explicitly setting the event
|
||||||
|
loop object used by the future. If it's not provided, the future uses
|
||||||
|
the default event loop.
|
||||||
|
"""
|
||||||
|
if loop is None:
|
||||||
|
self._loop = events.get_event_loop()
|
||||||
|
else:
|
||||||
|
self._loop = loop
|
||||||
|
self._callbacks = []
|
||||||
|
if self._loop.get_debug():
|
||||||
|
self._source_traceback = format_helpers.extract_stack(
|
||||||
|
sys._getframe(1))
|
||||||
|
|
||||||
|
_repr_info = base_futures._future_repr_info
|
||||||
|
|
||||||
|
def __repr__(self):
|
||||||
|
return '<{} {}>'.format(self.__class__.__name__,
|
||||||
|
' '.join(self._repr_info()))
|
||||||
|
|
||||||
|
def __del__(self):
|
||||||
|
if not self.__log_traceback:
|
||||||
|
# set_exception() was not called, or result() or exception()
|
||||||
|
# has consumed the exception
|
||||||
|
return
|
||||||
|
exc = self._exception
|
||||||
|
context = {
|
||||||
|
'message':
|
||||||
|
f'{self.__class__.__name__} exception was never retrieved',
|
||||||
|
'exception': exc,
|
||||||
|
'future': self,
|
||||||
|
}
|
||||||
|
if self._source_traceback:
|
||||||
|
context['source_traceback'] = self._source_traceback
|
||||||
|
self._loop.call_exception_handler(context)
|
||||||
|
|
||||||
|
@property
|
||||||
|
def _log_traceback(self):
|
||||||
|
return self.__log_traceback
|
||||||
|
|
||||||
|
@_log_traceback.setter
|
||||||
|
def _log_traceback(self, val):
|
||||||
|
if bool(val):
|
||||||
|
raise ValueError('_log_traceback can only be set to False')
|
||||||
|
self.__log_traceback = False
|
||||||
|
|
||||||
|
def get_loop(self):
|
||||||
|
"""Return the event loop the Future is bound to."""
|
||||||
|
return self._loop
|
||||||
|
|
||||||
|
def cancel(self):
|
||||||
|
"""Cancel the future and schedule callbacks.
|
||||||
|
|
||||||
|
If the future is already done or cancelled, return False. Otherwise,
|
||||||
|
change the future's state to cancelled, schedule the callbacks and
|
||||||
|
return True.
|
||||||
|
"""
|
||||||
|
self.__log_traceback = False
|
||||||
|
if self._state != _PENDING:
|
||||||
|
return False
|
||||||
|
self._state = _CANCELLED
|
||||||
|
self.__schedule_callbacks()
|
||||||
|
return True
|
||||||
|
|
||||||
|
def __schedule_callbacks(self):
|
||||||
|
"""Internal: Ask the event loop to call all callbacks.
|
||||||
|
|
||||||
|
The callbacks are scheduled to be called as soon as possible. Also
|
||||||
|
clears the callback list.
|
||||||
|
"""
|
||||||
|
callbacks = self._callbacks[:]
|
||||||
|
if not callbacks:
|
||||||
|
return
|
||||||
|
|
||||||
|
self._callbacks[:] = []
|
||||||
|
for callback, ctx in callbacks:
|
||||||
|
self._loop.call_soon(callback, self, context=ctx)
|
||||||
|
|
||||||
|
def cancelled(self):
|
||||||
|
"""Return True if the future was cancelled."""
|
||||||
|
return self._state == _CANCELLED
|
||||||
|
|
||||||
|
# Don't implement running(); see http://bugs.python.org/issue18699
|
||||||
|
|
||||||
|
def done(self):
|
||||||
|
"""Return True if the future is done.
|
||||||
|
|
||||||
|
Done means either that a result / exception are available, or that the
|
||||||
|
future was cancelled.
|
||||||
|
"""
|
||||||
|
return self._state != _PENDING
|
||||||
|
|
||||||
|
def result(self):
|
||||||
|
"""Return the result this future represents.
|
||||||
|
|
||||||
|
If the future has been cancelled, raises CancelledError. If the
|
||||||
|
future's result isn't yet available, raises InvalidStateError. If
|
||||||
|
the future is done and has an exception set, this exception is raised.
|
||||||
|
"""
|
||||||
|
if self._state == _CANCELLED:
|
||||||
|
raise CancelledError
|
||||||
|
if self._state != _FINISHED:
|
||||||
|
raise InvalidStateError('Result is not ready.')
|
||||||
|
self.__log_traceback = False
|
||||||
|
if self._exception is not None:
|
||||||
|
raise self._exception
|
||||||
|
return self._result
|
||||||
|
|
||||||
|
def exception(self):
|
||||||
|
"""Return the exception that was set on this future.
|
||||||
|
|
||||||
|
The exception (or None if no exception was set) is returned only if
|
||||||
|
the future is done. If the future has been cancelled, raises
|
||||||
|
CancelledError. If the future isn't done yet, raises
|
||||||
|
InvalidStateError.
|
||||||
|
"""
|
||||||
|
if self._state == _CANCELLED:
|
||||||
|
raise CancelledError
|
||||||
|
if self._state != _FINISHED:
|
||||||
|
raise InvalidStateError('Exception is not set.')
|
||||||
|
self.__log_traceback = False
|
||||||
|
return self._exception
|
||||||
|
|
||||||
|
def add_done_callback(self, fn, *, context=None):
|
||||||
|
"""Add a callback to be run when the future becomes done.
|
||||||
|
|
||||||
|
The callback is called with a single argument - the future object. If
|
||||||
|
the future is already done when this is called, the callback is
|
||||||
|
scheduled with call_soon.
|
||||||
|
"""
|
||||||
|
if self._state != _PENDING:
|
||||||
|
self._loop.call_soon(fn, self, context=context)
|
||||||
|
else:
|
||||||
|
if context is None:
|
||||||
|
context = contextvars.copy_context()
|
||||||
|
self._callbacks.append((fn, context))
|
||||||
|
|
||||||
|
# New method not in PEP 3148.
|
||||||
|
|
||||||
|
def remove_done_callback(self, fn):
|
||||||
|
"""Remove all instances of a callback from the "call when done" list.
|
||||||
|
|
||||||
|
Returns the number of callbacks removed.
|
||||||
|
"""
|
||||||
|
filtered_callbacks = [(f, ctx)
|
||||||
|
for (f, ctx) in self._callbacks
|
||||||
|
if f != fn]
|
||||||
|
removed_count = len(self._callbacks) - len(filtered_callbacks)
|
||||||
|
if removed_count:
|
||||||
|
self._callbacks[:] = filtered_callbacks
|
||||||
|
return removed_count
|
||||||
|
|
||||||
|
# So-called internal methods (note: no set_running_or_notify_cancel()).
|
||||||
|
|
||||||
|
def set_result(self, result):
|
||||||
|
"""Mark the future done and set its result.
|
||||||
|
|
||||||
|
If the future is already done when this method is called, raises
|
||||||
|
InvalidStateError.
|
||||||
|
"""
|
||||||
|
if self._state != _PENDING:
|
||||||
|
raise InvalidStateError('{}: {!r}'.format(self._state, self))
|
||||||
|
self._result = result
|
||||||
|
self._state = _FINISHED
|
||||||
|
self.__schedule_callbacks()
|
||||||
|
|
||||||
|
def set_exception(self, exception):
|
||||||
|
"""Mark the future done and set an exception.
|
||||||
|
|
||||||
|
If the future is already done when this method is called, raises
|
||||||
|
InvalidStateError.
|
||||||
|
"""
|
||||||
|
if self._state != _PENDING:
|
||||||
|
raise InvalidStateError('{}: {!r}'.format(self._state, self))
|
||||||
|
if isinstance(exception, type):
|
||||||
|
exception = exception()
|
||||||
|
if type(exception) is StopIteration:
|
||||||
|
raise TypeError("StopIteration interacts badly with generators "
|
||||||
|
"and cannot be raised into a Future")
|
||||||
|
self._exception = exception
|
||||||
|
self._state = _FINISHED
|
||||||
|
self.__schedule_callbacks()
|
||||||
|
self.__log_traceback = True
|
||||||
|
|
||||||
|
def __await__(self):
|
||||||
|
if not self.done():
|
||||||
|
self._asyncio_future_blocking = True
|
||||||
|
yield self # This tells Task to wait for completion.
|
||||||
|
if not self.done():
|
||||||
|
raise RuntimeError("await wasn't used with future")
|
||||||
|
return self.result() # May raise too.
|
||||||
|
|
||||||
|
__iter__ = __await__ # make compatible with 'yield from'.
|
||||||
|
|
||||||
|
|
||||||
|
# Needed for testing purposes.
|
||||||
|
_PyFuture = Future
|
||||||
|
|
||||||
|
|
||||||
|
def _get_loop(fut):
|
||||||
|
# Tries to call Future.get_loop() if it's available.
|
||||||
|
# Otherwise fallbacks to using the old '_loop' property.
|
||||||
|
try:
|
||||||
|
get_loop = fut.get_loop
|
||||||
|
except AttributeError:
|
||||||
|
pass
|
||||||
|
else:
|
||||||
|
return get_loop()
|
||||||
|
return fut._loop
|
||||||
|
|
||||||
|
|
||||||
|
def _set_result_unless_cancelled(fut, result):
|
||||||
|
"""Helper setting the result only if the future was not cancelled."""
|
||||||
|
if fut.cancelled():
|
||||||
|
return
|
||||||
|
fut.set_result(result)
|
||||||
|
|
||||||
|
|
||||||
|
def _set_concurrent_future_state(concurrent, source):
|
||||||
|
"""Copy state from a future to a concurrent.futures.Future."""
|
||||||
|
assert source.done()
|
||||||
|
if source.cancelled():
|
||||||
|
concurrent.cancel()
|
||||||
|
if not concurrent.set_running_or_notify_cancel():
|
||||||
|
return
|
||||||
|
exception = source.exception()
|
||||||
|
if exception is not None:
|
||||||
|
concurrent.set_exception(exception)
|
||||||
|
else:
|
||||||
|
result = source.result()
|
||||||
|
concurrent.set_result(result)
|
||||||
|
|
||||||
|
|
||||||
|
def _copy_future_state(source, dest):
|
||||||
|
"""Internal helper to copy state from another Future.
|
||||||
|
|
||||||
|
The other Future may be a concurrent.futures.Future.
|
||||||
|
"""
|
||||||
|
assert source.done()
|
||||||
|
if dest.cancelled():
|
||||||
|
return
|
||||||
|
assert not dest.done()
|
||||||
|
if source.cancelled():
|
||||||
|
dest.cancel()
|
||||||
|
else:
|
||||||
|
exception = source.exception()
|
||||||
|
if exception is not None:
|
||||||
|
dest.set_exception(exception)
|
||||||
|
else:
|
||||||
|
result = source.result()
|
||||||
|
dest.set_result(result)
|
||||||
|
|
||||||
|
|
||||||
|
def _chain_future(source, destination):
|
||||||
|
"""Chain two futures so that when one completes, so does the other.
|
||||||
|
|
||||||
|
The result (or exception) of source will be copied to destination.
|
||||||
|
If destination is cancelled, source gets cancelled too.
|
||||||
|
Compatible with both asyncio.Future and concurrent.futures.Future.
|
||||||
|
"""
|
||||||
|
if not isfuture(source) and not isinstance(source,
|
||||||
|
concurrent.futures.Future):
|
||||||
|
raise TypeError('A future is required for source argument')
|
||||||
|
if not isfuture(destination) and not isinstance(destination,
|
||||||
|
concurrent.futures.Future):
|
||||||
|
raise TypeError('A future is required for destination argument')
|
||||||
|
source_loop = _get_loop(source) if isfuture(source) else None
|
||||||
|
dest_loop = _get_loop(destination) if isfuture(destination) else None
|
||||||
|
|
||||||
|
def _set_state(future, other):
|
||||||
|
if isfuture(future):
|
||||||
|
_copy_future_state(other, future)
|
||||||
|
else:
|
||||||
|
_set_concurrent_future_state(future, other)
|
||||||
|
|
||||||
|
def _call_check_cancel(destination):
|
||||||
|
if destination.cancelled():
|
||||||
|
if source_loop is None or source_loop is dest_loop:
|
||||||
|
source.cancel()
|
||||||
|
else:
|
||||||
|
source_loop.call_soon_threadsafe(source.cancel)
|
||||||
|
|
||||||
|
def _call_set_state(source):
|
||||||
|
if (destination.cancelled() and
|
||||||
|
dest_loop is not None and dest_loop.is_closed()):
|
||||||
|
return
|
||||||
|
if dest_loop is None or dest_loop is source_loop:
|
||||||
|
_set_state(destination, source)
|
||||||
|
else:
|
||||||
|
dest_loop.call_soon_threadsafe(_set_state, destination, source)
|
||||||
|
|
||||||
|
destination.add_done_callback(_call_check_cancel)
|
||||||
|
source.add_done_callback(_call_set_state)
|
||||||
|
|
||||||
|
|
||||||
|
def wrap_future(future, *, loop=None):
|
||||||
|
"""Wrap concurrent.futures.Future object."""
|
||||||
|
if isfuture(future):
|
||||||
|
return future
|
||||||
|
assert isinstance(future, concurrent.futures.Future), \
|
||||||
|
f'concurrent.futures.Future is expected, got {future!r}'
|
||||||
|
if loop is None:
|
||||||
|
loop = events.get_event_loop()
|
||||||
|
new_future = loop.create_future()
|
||||||
|
_chain_future(future, new_future)
|
||||||
|
return new_future
|
||||||
|
|
||||||
|
|
||||||
|
try:
|
||||||
|
import _asyncio
|
||||||
|
except ImportError:
|
||||||
|
pass
|
||||||
|
else:
|
||||||
|
# _CFuture is needed for tests.
|
||||||
|
Future = _CFuture = _asyncio.Future
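
wrap_future() is the public entry point to the _chain_future() machinery above; a minimal sketch of bridging a thread-pool result into a coroutine (assumes Python 3.7's asyncio.run; the blocking_work function is illustrative):

import asyncio
import concurrent.futures


def blocking_work():
    return 42


async def main():
    with concurrent.futures.ThreadPoolExecutor() as pool:
        # The concurrent.futures.Future is chained to a new
        # asyncio.Future on the current loop, so it can be awaited.
        result = await asyncio.wrap_future(pool.submit(blocking_work))
    print(result)  # 42


asyncio.run(main())
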
507
Lib/asyncio/locks.py
Normal file
@@ -0,0 +1,507 @@
"""Synchronization primitives."""
|
||||||
|
|
||||||
|
__all__ = ('Lock', 'Event', 'Condition', 'Semaphore', 'BoundedSemaphore')
|
||||||
|
|
||||||
|
import collections
|
||||||
|
import warnings
|
||||||
|
|
||||||
|
from . import events
|
||||||
|
from . import futures
|
||||||
|
from .coroutines import coroutine
|
||||||
|
|
||||||
|
|
||||||
|
class _ContextManager:
|
||||||
|
"""Context manager.
|
||||||
|
|
||||||
|
This enables the following idiom for acquiring and releasing a
|
||||||
|
lock around a block:
|
||||||
|
|
||||||
|
with (yield from lock):
|
||||||
|
<block>
|
||||||
|
|
||||||
|
while failing loudly when accidentally using:
|
||||||
|
|
||||||
|
with lock:
|
||||||
|
<block>
|
||||||
|
|
||||||
|
Deprecated, use 'async with' statement:
|
||||||
|
async with lock:
|
||||||
|
<block>
|
||||||
|
"""
|
||||||
|
|
||||||
|
def __init__(self, lock):
|
||||||
|
self._lock = lock
|
||||||
|
|
||||||
|
def __enter__(self):
|
||||||
|
# We have no use for the "as ..." clause in the with
|
||||||
|
# statement for locks.
|
||||||
|
return None
|
||||||
|
|
||||||
|
def __exit__(self, *args):
|
||||||
|
try:
|
||||||
|
self._lock.release()
|
||||||
|
finally:
|
||||||
|
self._lock = None # Crudely prevent reuse.
|
||||||
|
|
||||||
|
|
||||||
|
class _ContextManagerMixin:
|
||||||
|
def __enter__(self):
|
||||||
|
raise RuntimeError(
|
||||||
|
'"yield from" should be used as context manager expression')
|
||||||
|
|
||||||
|
def __exit__(self, *args):
|
||||||
|
# This must exist because __enter__ exists, even though that
|
||||||
|
# always raises; that's how the with-statement works.
|
||||||
|
pass
|
||||||
|
|
||||||
|
@coroutine
|
||||||
|
def __iter__(self):
|
||||||
|
# This is not a coroutine. It is meant to enable the idiom:
|
||||||
|
#
|
||||||
|
# with (yield from lock):
|
||||||
|
# <block>
|
||||||
|
#
|
||||||
|
# as an alternative to:
|
||||||
|
#
|
||||||
|
# yield from lock.acquire()
|
||||||
|
# try:
|
||||||
|
# <block>
|
||||||
|
# finally:
|
||||||
|
# lock.release()
|
||||||
|
# Deprecated, use 'async with' statement:
|
||||||
|
# async with lock:
|
||||||
|
# <block>
|
||||||
|
warnings.warn("'with (yield from lock)' is deprecated "
|
||||||
|
"use 'async with lock' instead",
|
||||||
|
DeprecationWarning, stacklevel=2)
|
||||||
|
yield from self.acquire()
|
||||||
|
return _ContextManager(self)
|
||||||
|
|
||||||
|
async def __acquire_ctx(self):
|
||||||
|
await self.acquire()
|
||||||
|
return _ContextManager(self)
|
||||||
|
|
||||||
|
def __await__(self):
|
||||||
|
warnings.warn("'with await lock' is deprecated "
|
||||||
|
"use 'async with lock' instead",
|
||||||
|
DeprecationWarning, stacklevel=2)
|
||||||
|
# To make "with await lock" work.
|
||||||
|
return self.__acquire_ctx().__await__()
|
||||||
|
|
||||||
|
async def __aenter__(self):
|
||||||
|
await self.acquire()
|
||||||
|
# We have no use for the "as ..." clause in the with
|
||||||
|
# statement for locks.
|
||||||
|
return None
|
||||||
|
|
||||||
|
async def __aexit__(self, exc_type, exc, tb):
|
||||||
|
self.release()
|
||||||
|
|
||||||
|
|
||||||
|
class Lock(_ContextManagerMixin):
|
||||||
|
"""Primitive lock objects.
|
||||||
|
|
||||||
|
A primitive lock is a synchronization primitive that is not owned
|
||||||
|
by a particular coroutine when locked. A primitive lock is in one
|
||||||
|
of two states, 'locked' or 'unlocked'.
|
||||||
|
|
||||||
|
It is created in the unlocked state. It has two basic methods,
|
||||||
|
acquire() and release(). When the state is unlocked, acquire()
|
||||||
|
changes the state to locked and returns immediately. When the
|
||||||
|
state is locked, acquire() blocks until a call to release() in
|
||||||
|
another coroutine changes it to unlocked, then the acquire() call
|
||||||
|
resets it to locked and returns. The release() method should only
|
||||||
|
be called in the locked state; it changes the state to unlocked
|
||||||
|
and returns immediately. If an attempt is made to release an
|
||||||
|
unlocked lock, a RuntimeError will be raised.
|
||||||
|
|
||||||
|
When more than one coroutine is blocked in acquire() waiting for
|
||||||
|
the state to turn to unlocked, only one coroutine proceeds when a
|
||||||
|
release() call resets the state to unlocked; first coroutine which
|
||||||
|
is blocked in acquire() is being processed.
|
||||||
|
|
||||||
|
acquire() is a coroutine and should be called with 'await'.
|
||||||
|
|
||||||
|
Locks also support the asynchronous context management protocol.
|
||||||
|
'async with lock' statement should be used.
|
||||||
|
|
||||||
|
Usage:
|
||||||
|
|
||||||
|
lock = Lock()
|
||||||
|
...
|
||||||
|
await lock.acquire()
|
||||||
|
try:
|
||||||
|
...
|
||||||
|
finally:
|
||||||
|
lock.release()
|
||||||
|
|
||||||
|
Context manager usage:
|
||||||
|
|
||||||
|
lock = Lock()
|
||||||
|
...
|
||||||
|
async with lock:
|
||||||
|
...
|
||||||
|
|
||||||
|
Lock objects can be tested for locking state:
|
||||||
|
|
||||||
|
if not lock.locked():
|
||||||
|
await lock.acquire()
|
||||||
|
else:
|
||||||
|
# lock is acquired
|
||||||
|
...
|
||||||
|
|
||||||
|
"""
|
||||||
|
|
||||||
|
def __init__(self, *, loop=None):
|
||||||
|
self._waiters = collections.deque()
|
||||||
|
self._locked = False
|
||||||
|
if loop is not None:
|
||||||
|
self._loop = loop
|
||||||
|
else:
|
||||||
|
self._loop = events.get_event_loop()
|
||||||
|
|
||||||
|
def __repr__(self):
|
||||||
|
res = super().__repr__()
|
||||||
|
extra = 'locked' if self._locked else 'unlocked'
|
||||||
|
if self._waiters:
|
||||||
|
extra = f'{extra}, waiters:{len(self._waiters)}'
|
||||||
|
return f'<{res[1:-1]} [{extra}]>'
|
||||||
|
|
||||||
|
def locked(self):
|
||||||
|
"""Return True if lock is acquired."""
|
||||||
|
return self._locked
|
||||||
|
|
||||||
|
async def acquire(self):
|
||||||
|
"""Acquire a lock.
|
||||||
|
|
||||||
|
This method blocks until the lock is unlocked, then sets it to
|
||||||
|
locked and returns True.
|
||||||
|
"""
|
||||||
|
if not self._locked and all(w.cancelled() for w in self._waiters):
|
||||||
|
self._locked = True
|
||||||
|
return True
|
||||||
|
|
||||||
|
fut = self._loop.create_future()
|
||||||
|
self._waiters.append(fut)
|
||||||
|
|
||||||
|
# Finally block should be called before the CancelledError
|
||||||
|
# handling as we don't want CancelledError to call
|
||||||
|
# _wake_up_first() and attempt to wake up itself.
|
||||||
|
try:
|
||||||
|
try:
|
||||||
|
await fut
|
||||||
|
finally:
|
||||||
|
self._waiters.remove(fut)
|
||||||
|
except futures.CancelledError:
|
||||||
|
if not self._locked:
|
||||||
|
self._wake_up_first()
|
||||||
|
raise
|
||||||
|
|
||||||
|
self._locked = True
|
||||||
|
return True
|
||||||
|
|
||||||
|
def release(self):
|
||||||
|
"""Release a lock.
|
||||||
|
|
||||||
|
When the lock is locked, reset it to unlocked, and return.
|
||||||
|
If any other coroutines are blocked waiting for the lock to become
|
||||||
|
unlocked, allow exactly one of them to proceed.
|
||||||
|
|
||||||
|
When invoked on an unlocked lock, a RuntimeError is raised.
|
||||||
|
|
||||||
|
There is no return value.
|
||||||
|
"""
|
||||||
|
if self._locked:
|
||||||
|
self._locked = False
|
||||||
|
self._wake_up_first()
|
||||||
|
else:
|
||||||
|
raise RuntimeError('Lock is not acquired.')
|
||||||
|
|
||||||
|
def _wake_up_first(self):
|
||||||
|
"""Wake up the first waiter if it isn't done."""
|
||||||
|
try:
|
||||||
|
fut = next(iter(self._waiters))
|
||||||
|
except StopIteration:
|
||||||
|
return
|
||||||
|
|
||||||
|
# .done() necessarily means that a waiter will wake up later on and
|
||||||
|
# either take the lock, or, if it was cancelled and lock wasn't
|
||||||
|
# taken already, will hit this again and wake up a new waiter.
|
||||||
|
if not fut.done():
|
||||||
|
fut.set_result(True)
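
# Usage sketch (illustrative, not part of this module): the preferred
# modern idiom for the class above is simply
#
#     lock = asyncio.Lock()
#
#     async def critical_section():
#         async with lock:
#             ...  # at most one coroutine runs this block at a time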


class Event:
    """Asynchronous equivalent to threading.Event.

    Class implementing event objects.  An event manages a flag that can be set
    to true with the set() method and reset to false with the clear() method.
    The wait() method blocks until the flag is true.  The flag is initially
    false.
    """

    def __init__(self, *, loop=None):
        self._waiters = collections.deque()
        self._value = False
        if loop is not None:
            self._loop = loop
        else:
            self._loop = events.get_event_loop()

    def __repr__(self):
        res = super().__repr__()
        extra = 'set' if self._value else 'unset'
        if self._waiters:
            extra = f'{extra}, waiters:{len(self._waiters)}'
        return f'<{res[1:-1]} [{extra}]>'

    def is_set(self):
        """Return True if and only if the internal flag is true."""
        return self._value

    def set(self):
        """Set the internal flag to true.  All coroutines waiting for it to
        become true are awakened.  Coroutines that call wait() once the flag
        is true will not block at all.
        """
        if not self._value:
            self._value = True

            for fut in self._waiters:
                if not fut.done():
                    fut.set_result(True)

    def clear(self):
        """Reset the internal flag to false.  Subsequently, coroutines calling
        wait() will block until set() is called to set the internal flag
        to true again."""
        self._value = False

    async def wait(self):
        """Block until the internal flag is true.

        If the internal flag is true on entry, return True
        immediately.  Otherwise, block until another coroutine calls
        set() to set the flag to true, then return True.
        """
        if self._value:
            return True

        fut = self._loop.create_future()
        self._waiters.append(fut)
        try:
            await fut
            return True
        finally:
            self._waiters.remove(fut)


class Condition(_ContextManagerMixin):
    """Asynchronous equivalent to threading.Condition.

    This class implements condition variable objects.  A condition variable
    allows one or more coroutines to wait until they are notified by another
    coroutine.

    A new Lock object is created and used as the underlying lock.
    """

    def __init__(self, lock=None, *, loop=None):
        if loop is not None:
            self._loop = loop
        else:
            self._loop = events.get_event_loop()

        if lock is None:
            lock = Lock(loop=self._loop)
        elif lock._loop is not self._loop:
            raise ValueError("loop argument must agree with lock")

        self._lock = lock
        # Export the lock's locked(), acquire() and release() methods.
        self.locked = lock.locked
        self.acquire = lock.acquire
        self.release = lock.release

        self._waiters = collections.deque()

    def __repr__(self):
        res = super().__repr__()
        extra = 'locked' if self.locked() else 'unlocked'
        if self._waiters:
            extra = f'{extra}, waiters:{len(self._waiters)}'
        return f'<{res[1:-1]} [{extra}]>'

    async def wait(self):
        """Wait until notified.

        If the calling coroutine has not acquired the lock when this
        method is called, a RuntimeError is raised.

        This method releases the underlying lock, and then blocks
        until it is awakened by a notify() or notify_all() call for
        the same condition variable in another coroutine.  Once
        awakened, it re-acquires the lock and returns True.
        """
        if not self.locked():
            raise RuntimeError('cannot wait on un-acquired lock')

        self.release()
        try:
            fut = self._loop.create_future()
            self._waiters.append(fut)
            try:
                await fut
                return True
            finally:
                self._waiters.remove(fut)

        finally:
            # Must reacquire lock even if wait is cancelled
            cancelled = False
            while True:
                try:
                    await self.acquire()
                    break
                except futures.CancelledError:
                    cancelled = True

            if cancelled:
                raise futures.CancelledError

    async def wait_for(self, predicate):
        """Wait until a predicate becomes true.

        The predicate should be a callable whose result will be
        interpreted as a boolean value.  The final predicate value is
        the return value.
        """
        result = predicate()
        while not result:
            await self.wait()
            result = predicate()
        return result

    def notify(self, n=1):
        """By default, wake up one coroutine waiting on this condition, if any.
        If the calling coroutine has not acquired the lock when this method
        is called, a RuntimeError is raised.

        This method wakes up at most n of the coroutines waiting for the
        condition variable; it is a no-op if no coroutines are waiting.

        Note: an awakened coroutine does not actually return from its
        wait() call until it can reacquire the lock.  Since notify() does
        not release the lock, its caller should.
        """
        if not self.locked():
            raise RuntimeError('cannot notify on un-acquired lock')

        idx = 0
        for fut in self._waiters:
            if idx >= n:
                break

            if not fut.done():
                idx += 1
                fut.set_result(False)

    def notify_all(self):
        """Wake up all coroutines waiting on this condition.  This method acts
        like notify(), but wakes up all waiting coroutines instead of one.  If
        the calling coroutine has not acquired the lock when this method is
        called, a RuntimeError is raised.
        """
        self.notify(len(self._waiters))
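
# Usage sketch (illustrative, not part of this module; `items` is a
# hypothetical shared deque): a consumer waits under the lock while a
# producer notifies under the same lock:
#
#     cond = asyncio.Condition()
#
#     async def consumer():
#         async with cond:
#             await cond.wait_for(lambda: items)
#             return items.popleft()
#
#     async def producer(item):
#         async with cond:
#             items.append(item)
#             cond.notify()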


class Semaphore(_ContextManagerMixin):
    """A Semaphore implementation.

    A semaphore manages an internal counter which is decremented by each
    acquire() call and incremented by each release() call.  The counter
    can never go below zero; when acquire() finds that it is zero, it blocks,
    waiting until some other coroutine calls release().

    Semaphores also support the context management protocol.

    The optional argument gives the initial value for the internal
    counter; it defaults to 1.  If the value given is less than 0,
    ValueError is raised.
    """

    def __init__(self, value=1, *, loop=None):
        if value < 0:
            raise ValueError("Semaphore initial value must be >= 0")
        self._value = value
        self._waiters = collections.deque()
        if loop is not None:
            self._loop = loop
        else:
            self._loop = events.get_event_loop()

    def __repr__(self):
        res = super().__repr__()
        extra = 'locked' if self.locked() else f'unlocked, value:{self._value}'
        if self._waiters:
            extra = f'{extra}, waiters:{len(self._waiters)}'
        return f'<{res[1:-1]} [{extra}]>'

    def _wake_up_next(self):
        while self._waiters:
            waiter = self._waiters.popleft()
            if not waiter.done():
                waiter.set_result(None)
                return

    def locked(self):
        """Return True if the semaphore cannot be acquired immediately."""
        return self._value == 0

    async def acquire(self):
        """Acquire a semaphore.

        If the internal counter is larger than zero on entry,
        decrement it by one and return True immediately.  If it is
        zero on entry, block, waiting until some other coroutine has
        called release() to make it larger than 0, and then return
        True.
        """
        while self._value <= 0:
            fut = self._loop.create_future()
            self._waiters.append(fut)
            try:
                await fut
            except:
                # See the similar code in Queue.get.
                fut.cancel()
                if self._value > 0 and not fut.cancelled():
                    self._wake_up_next()
                raise
        self._value -= 1
        return True

    def release(self):
        """Release a semaphore, incrementing the internal counter by one.
        When it was zero on entry and another coroutine is waiting for it to
        become larger than zero again, wake up that coroutine.
        """
        self._value += 1
        self._wake_up_next()


class BoundedSemaphore(Semaphore):
    """A bounded semaphore implementation.

    This raises ValueError in release() if it would increase the value
    above the initial value.
    """

    def __init__(self, value=1, *, loop=None):
        self._bound_value = value
        super().__init__(value, loop=loop)

    def release(self):
        if self._value >= self._bound_value:
            raise ValueError('BoundedSemaphore released too many times')
        super().release()
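
A self-contained sketch of the semaphore classes above (assumes Python 3.7's asyncio.run; the worker count and bound are arbitrary):

import asyncio


async def worker(sem, i):
    async with sem:  # _ContextManagerMixin gives semaphores async-with
        await asyncio.sleep(0.1)
        return i


async def main():
    sem = asyncio.BoundedSemaphore(2)  # at most two workers run at once
    return await asyncio.gather(*(worker(sem, i) for i in range(5)))


print(asyncio.run(main()))  # [0, 1, 2, 3, 4]
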
7
Lib/asyncio/log.py
Normal file
@@ -0,0 +1,7 @@
"""Logging configuration."""
|
||||||
|
|
||||||
|
import logging
|
||||||
|
|
||||||
|
|
||||||
|
# Name the logger after the package.
|
||||||
|
logger = logging.getLogger(__package__)
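
This logger's output (including the "exception was never retrieved" reports produced in futures.py) is visible only when stdlib logging is configured; a minimal sketch:

import logging

# The package logger above resolves to the name "asyncio".
logging.basicConfig(level=logging.DEBUG)
logging.getLogger('asyncio').setLevel(logging.DEBUG)
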
696
Lib/asyncio/proactor_events.py
Normal file
@@ -0,0 +1,696 @@
"""Event loop using a proactor and related classes.
|
||||||
|
|
||||||
|
A proactor is a "notify-on-completion" multiplexer. Currently a
|
||||||
|
proactor is only implemented on Windows with IOCP.
|
||||||
|
"""
|
||||||
|
|
||||||
|
__all__ = 'BaseProactorEventLoop',
|
||||||
|
|
||||||
|
import io
|
||||||
|
import os
|
||||||
|
import socket
|
||||||
|
import warnings
|
||||||
|
|
||||||
|
from . import base_events
|
||||||
|
from . import constants
|
||||||
|
from . import events
|
||||||
|
from . import futures
|
||||||
|
from . import protocols
|
||||||
|
from . import sslproto
|
||||||
|
from . import transports
|
||||||
|
from .log import logger
|
||||||
|
|
||||||
|
|
||||||
|
class _ProactorBasePipeTransport(transports._FlowControlMixin,
|
||||||
|
transports.BaseTransport):
|
||||||
|
"""Base class for pipe and socket transports."""
|
||||||
|
|
||||||
|
def __init__(self, loop, sock, protocol, waiter=None,
|
||||||
|
extra=None, server=None):
|
||||||
|
super().__init__(extra, loop)
|
||||||
|
self._set_extra(sock)
|
||||||
|
self._sock = sock
|
||||||
|
self.set_protocol(protocol)
|
||||||
|
self._server = server
|
||||||
|
self._buffer = None # None or bytearray.
|
||||||
|
self._read_fut = None
|
||||||
|
self._write_fut = None
|
||||||
|
self._pending_write = 0
|
||||||
|
self._conn_lost = 0
|
||||||
|
self._closing = False # Set when close() called.
|
||||||
|
self._eof_written = False
|
||||||
|
if self._server is not None:
|
||||||
|
self._server._attach()
|
||||||
|
self._loop.call_soon(self._protocol.connection_made, self)
|
||||||
|
if waiter is not None:
|
||||||
|
# only wake up the waiter when connection_made() has been called
|
||||||
|
self._loop.call_soon(futures._set_result_unless_cancelled,
|
||||||
|
waiter, None)
|
||||||
|
|
||||||
|
def __repr__(self):
|
||||||
|
info = [self.__class__.__name__]
|
||||||
|
if self._sock is None:
|
||||||
|
info.append('closed')
|
||||||
|
elif self._closing:
|
||||||
|
info.append('closing')
|
||||||
|
if self._sock is not None:
|
||||||
|
info.append(f'fd={self._sock.fileno()}')
|
||||||
|
if self._read_fut is not None:
|
||||||
|
info.append(f'read={self._read_fut!r}')
|
||||||
|
if self._write_fut is not None:
|
||||||
|
info.append(f'write={self._write_fut!r}')
|
||||||
|
if self._buffer:
|
||||||
|
info.append(f'write_bufsize={len(self._buffer)}')
|
||||||
|
if self._eof_written:
|
||||||
|
info.append('EOF written')
|
||||||
|
return '<{}>'.format(' '.join(info))
|
||||||
|
|
||||||
|
def _set_extra(self, sock):
|
||||||
|
self._extra['pipe'] = sock
|
||||||
|
|
||||||
|
def set_protocol(self, protocol):
|
||||||
|
self._protocol = protocol
|
||||||
|
|
||||||
|
def get_protocol(self):
|
||||||
|
return self._protocol
|
||||||
|
|
||||||
|
def is_closing(self):
|
||||||
|
return self._closing
|
||||||
|
|
||||||
|
def close(self):
|
||||||
|
if self._closing:
|
||||||
|
return
|
||||||
|
self._closing = True
|
||||||
|
self._conn_lost += 1
|
||||||
|
if not self._buffer and self._write_fut is None:
|
||||||
|
self._loop.call_soon(self._call_connection_lost, None)
|
||||||
|
if self._read_fut is not None:
|
||||||
|
self._read_fut.cancel()
|
||||||
|
self._read_fut = None
|
||||||
|
|
||||||
|
def __del__(self):
|
||||||
|
if self._sock is not None:
|
||||||
|
warnings.warn(f"unclosed transport {self!r}", ResourceWarning,
|
||||||
|
source=self)
|
||||||
|
self.close()
|
||||||
|
|
||||||
|
def _fatal_error(self, exc, message='Fatal error on pipe transport'):
|
||||||
|
try:
|
||||||
|
if isinstance(exc, base_events._FATAL_ERROR_IGNORE):
|
||||||
|
if self._loop.get_debug():
|
||||||
|
logger.debug("%r: %s", self, message, exc_info=True)
|
||||||
|
else:
|
||||||
|
self._loop.call_exception_handler({
|
||||||
|
'message': message,
|
||||||
|
'exception': exc,
|
||||||
|
'transport': self,
|
||||||
|
'protocol': self._protocol,
|
||||||
|
})
|
||||||
|
finally:
|
||||||
|
self._force_close(exc)
|
||||||
|
|
||||||
|
def _force_close(self, exc):
|
||||||
|
if self._empty_waiter is not None:
|
||||||
|
if exc is None:
|
||||||
|
self._empty_waiter.set_result(None)
|
||||||
|
else:
|
||||||
|
self._empty_waiter.set_exception(exc)
|
||||||
|
if self._closing:
|
||||||
|
return
|
||||||
|
self._closing = True
|
||||||
|
self._conn_lost += 1
|
||||||
|
if self._write_fut:
|
||||||
|
self._write_fut.cancel()
|
||||||
|
self._write_fut = None
|
||||||
|
if self._read_fut:
|
||||||
|
self._read_fut.cancel()
|
||||||
|
self._read_fut = None
|
||||||
|
self._pending_write = 0
|
||||||
|
self._buffer = None
|
||||||
|
self._loop.call_soon(self._call_connection_lost, exc)
|
||||||
|
|
||||||
|
def _call_connection_lost(self, exc):
|
||||||
|
try:
|
||||||
|
self._protocol.connection_lost(exc)
|
||||||
|
finally:
|
||||||
|
# XXX If there is a pending overlapped read on the other
|
||||||
|
# end then it may fail with ERROR_NETNAME_DELETED if we
|
||||||
|
# just close our end. First calling shutdown() seems to
|
||||||
|
# cure it, but maybe using DisconnectEx() would be better.
|
||||||
|
if hasattr(self._sock, 'shutdown'):
|
||||||
|
self._sock.shutdown(socket.SHUT_RDWR)
|
||||||
|
self._sock.close()
|
||||||
|
self._sock = None
|
||||||
|
server = self._server
|
||||||
|
if server is not None:
|
||||||
|
server._detach()
|
||||||
|
self._server = None
|
||||||
|
|
||||||
|
def get_write_buffer_size(self):
|
||||||
|
size = self._pending_write
|
||||||
|
if self._buffer is not None:
|
||||||
|
size += len(self._buffer)
|
||||||
|
return size
|
||||||
|
|
||||||
|
|
||||||
|
class _ProactorReadPipeTransport(_ProactorBasePipeTransport,
|
||||||
|
transports.ReadTransport):
|
||||||
|
"""Transport for read pipes."""
|
||||||
|
|
||||||
|
def __init__(self, loop, sock, protocol, waiter=None,
|
||||||
|
extra=None, server=None):
|
||||||
|
self._pending_data = None
|
||||||
|
self._paused = True
|
||||||
|
super().__init__(loop, sock, protocol, waiter, extra, server)
|
||||||
|
|
||||||
|
self._loop.call_soon(self._loop_reading)
|
||||||
|
self._paused = False
|
||||||
|
|
||||||
|
def is_reading(self):
|
||||||
|
return not self._paused and not self._closing
|
||||||
|
|
||||||
|
def pause_reading(self):
|
||||||
|
if self._closing or self._paused:
|
||||||
|
return
|
||||||
|
self._paused = True
|
||||||
|
|
||||||
|
# bpo-33694: Don't cancel self._read_fut because cancelling an
|
||||||
|
# overlapped WSASend() loss silently data with the current proactor
|
||||||
|
# implementation.
|
||||||
|
#
|
||||||
|
# If CancelIoEx() fails with ERROR_NOT_FOUND, it means that WSASend()
|
||||||
|
# completed (even if HasOverlappedIoCompleted() returns 0), but
|
||||||
|
# Overlapped.cancel() currently silently ignores the ERROR_NOT_FOUND
|
||||||
|
# error. Once the overlapped is ignored, the IOCP loop will ignores the
|
||||||
|
# completion I/O event and so not read the result of the overlapped
|
||||||
|
# WSARecv().
|
||||||
|
|
||||||
|
if self._loop.get_debug():
|
||||||
|
logger.debug("%r pauses reading", self)
|
||||||
|
|
||||||
|
def resume_reading(self):
|
||||||
|
if self._closing or not self._paused:
|
||||||
|
return
|
||||||
|
|
||||||
|
self._paused = False
|
||||||
|
if self._read_fut is None:
|
||||||
|
self._loop.call_soon(self._loop_reading, None)
|
||||||
|
|
||||||
|
data = self._pending_data
|
||||||
|
self._pending_data = None
|
||||||
|
if data is not None:
|
||||||
|
# Call the protocol methode after calling _loop_reading(),
|
||||||
|
# since the protocol can decide to pause reading again.
|
||||||
|
self._loop.call_soon(self._data_received, data)
|
||||||
|
|
||||||
|
if self._loop.get_debug():
|
||||||
|
logger.debug("%r resumes reading", self)
|
||||||
|
|
||||||
|
def _eof_received(self):
|
||||||
|
if self._loop.get_debug():
|
||||||
|
logger.debug("%r received EOF", self)
|
||||||
|
|
||||||
|
try:
|
||||||
|
keep_open = self._protocol.eof_received()
|
||||||
|
except Exception as exc:
|
            self._fatal_error(
                exc, 'Fatal error: protocol.eof_received() call failed.')
            return

        if not keep_open:
            self.close()

    def _data_received(self, data):
        if self._paused:
            # Don't call any protocol method while reading is paused.
            # The protocol will be called on resume_reading().
            assert self._pending_data is None
            self._pending_data = data
            return

        if not data:
            self._eof_received()
            return

        if isinstance(self._protocol, protocols.BufferedProtocol):
            try:
                protocols._feed_data_to_buffered_proto(self._protocol, data)
            except Exception as exc:
                self._fatal_error(exc,
                                  'Fatal error: protocol.buffer_updated() '
                                  'call failed.')
                return
        else:
            self._protocol.data_received(data)

    def _loop_reading(self, fut=None):
        data = None
        try:
            if fut is not None:
                assert self._read_fut is fut or (self._read_fut is None and
                                                 self._closing)
                self._read_fut = None
                if fut.done():
                    # deliver data later in "finally" clause
                    data = fut.result()
                else:
                    # the future will be replaced by next proactor.recv call
                    fut.cancel()

            if self._closing:
                # since close() has been called we ignore any read data
                data = None
                return

            if data == b'':
                # we got end-of-file so no need to reschedule a new read
                return

            # bpo-33694: buffer_updated() has currently no fast path because of
            # a data loss issue caused by overlapped WSASend() cancellation.

            if not self._paused:
                # reschedule a new read
                self._read_fut = self._loop._proactor.recv(self._sock, 32768)
        except ConnectionAbortedError as exc:
            if not self._closing:
                self._fatal_error(exc, 'Fatal read error on pipe transport')
            elif self._loop.get_debug():
                logger.debug("Read error on pipe transport while closing",
                             exc_info=True)
        except ConnectionResetError as exc:
            self._force_close(exc)
        except OSError as exc:
            self._fatal_error(exc, 'Fatal read error on pipe transport')
        except futures.CancelledError:
            if not self._closing:
                raise
        else:
            if not self._paused:
                self._read_fut.add_done_callback(self._loop_reading)
        finally:
            if data is not None:
                self._data_received(data)


class _ProactorBaseWritePipeTransport(_ProactorBasePipeTransport,
                                      transports.WriteTransport):
    """Transport for write pipes."""

    _start_tls_compatible = True

    def __init__(self, *args, **kw):
        super().__init__(*args, **kw)
        self._empty_waiter = None

    def write(self, data):
        if not isinstance(data, (bytes, bytearray, memoryview)):
            raise TypeError(
                f"data argument must be a bytes-like object, "
                f"not {type(data).__name__}")
        if self._eof_written:
            raise RuntimeError('write_eof() already called')
        if self._empty_waiter is not None:
            raise RuntimeError('unable to write; sendfile is in progress')

        if not data:
            return

        if self._conn_lost:
            if self._conn_lost >= constants.LOG_THRESHOLD_FOR_CONNLOST_WRITES:
                logger.warning('socket.send() raised exception.')
            self._conn_lost += 1
            return

        # Observable states:
        # 1. IDLE: _write_fut and _buffer both None
        # 2. WRITING: _write_fut set; _buffer None
        # 3. BACKED UP: _write_fut set; _buffer a bytearray
        # We always copy the data, so the caller can't modify it
        # while we're still waiting for the I/O to happen.
        if self._write_fut is None:  # IDLE -> WRITING
            assert self._buffer is None
            # Pass a copy, except if it's already immutable.
            self._loop_writing(data=bytes(data))
        elif not self._buffer:  # WRITING -> BACKED UP
            # Make a mutable copy which we can extend.
            self._buffer = bytearray(data)
            self._maybe_pause_protocol()
        else:  # BACKED UP
            # Append to buffer (also copies).
            self._buffer.extend(data)
            self._maybe_pause_protocol()

    def _loop_writing(self, f=None, data=None):
        try:
            if f is not None and self._write_fut is None and self._closing:
                # XXX most likely self._force_close() has been called, and
                # it has set self._write_fut to None.
                return
            assert f is self._write_fut
            self._write_fut = None
            self._pending_write = 0
            if f:
                f.result()
            if data is None:
                data = self._buffer
                self._buffer = None
            if not data:
                if self._closing:
                    self._loop.call_soon(self._call_connection_lost, None)
                if self._eof_written:
                    self._sock.shutdown(socket.SHUT_WR)
                # Now that we've reduced the buffer size, tell the
                # protocol to resume writing if it was paused. Note that
                # we do this last since the callback is called immediately
                # and it may add more data to the buffer (even causing the
                # protocol to be paused again).
                self._maybe_resume_protocol()
            else:
                self._write_fut = self._loop._proactor.send(self._sock, data)
                if not self._write_fut.done():
                    assert self._pending_write == 0
                    self._pending_write = len(data)
                    self._write_fut.add_done_callback(self._loop_writing)
                    self._maybe_pause_protocol()
                else:
                    self._write_fut.add_done_callback(self._loop_writing)
            if self._empty_waiter is not None and self._write_fut is None:
                self._empty_waiter.set_result(None)
        except ConnectionResetError as exc:
            self._force_close(exc)
        except OSError as exc:
            self._fatal_error(exc, 'Fatal write error on pipe transport')

    def can_write_eof(self):
        return True

    def write_eof(self):
        self.close()

    def abort(self):
        self._force_close(None)

    def _make_empty_waiter(self):
        if self._empty_waiter is not None:
            raise RuntimeError("Empty waiter is already set")
        self._empty_waiter = self._loop.create_future()
        if self._write_fut is None:
            self._empty_waiter.set_result(None)
        return self._empty_waiter

    def _reset_empty_waiter(self):
        self._empty_waiter = None


class _ProactorWritePipeTransport(_ProactorBaseWritePipeTransport):
    def __init__(self, *args, **kw):
        super().__init__(*args, **kw)
        self._read_fut = self._loop._proactor.recv(self._sock, 16)
        self._read_fut.add_done_callback(self._pipe_closed)

    def _pipe_closed(self, fut):
        if fut.cancelled():
            # the transport has been closed
            return
        assert fut.result() == b''
        if self._closing:
            assert self._read_fut is None
            return
        assert fut is self._read_fut, (fut, self._read_fut)
        self._read_fut = None
        if self._write_fut is not None:
            self._force_close(BrokenPipeError())
        else:
            self.close()


class _ProactorDuplexPipeTransport(_ProactorReadPipeTransport,
                                   _ProactorBaseWritePipeTransport,
                                   transports.Transport):
    """Transport for duplex pipes."""

    def can_write_eof(self):
        return False

    def write_eof(self):
        raise NotImplementedError


class _ProactorSocketTransport(_ProactorReadPipeTransport,
                               _ProactorBaseWritePipeTransport,
                               transports.Transport):
    """Transport for connected sockets."""

    _sendfile_compatible = constants._SendfileMode.TRY_NATIVE

    def __init__(self, loop, sock, protocol, waiter=None,
                 extra=None, server=None):
        super().__init__(loop, sock, protocol, waiter, extra, server)
        base_events._set_nodelay(sock)

    def _set_extra(self, sock):
        self._extra['socket'] = sock

        try:
            self._extra['sockname'] = sock.getsockname()
        except (socket.error, AttributeError):
            if self._loop.get_debug():
                logger.warning(
                    "getsockname() failed on %r", sock, exc_info=True)

        if 'peername' not in self._extra:
            try:
                self._extra['peername'] = sock.getpeername()
            except (socket.error, AttributeError):
                if self._loop.get_debug():
                    logger.warning("getpeername() failed on %r",
                                   sock, exc_info=True)

    def can_write_eof(self):
        return True

    def write_eof(self):
        if self._closing or self._eof_written:
            return
        self._eof_written = True
        if self._write_fut is None:
            self._sock.shutdown(socket.SHUT_WR)


class BaseProactorEventLoop(base_events.BaseEventLoop):

    def __init__(self, proactor):
        super().__init__()
        logger.debug('Using proactor: %s', proactor.__class__.__name__)
        self._proactor = proactor
        self._selector = proactor  # convenient alias
        self._self_reading_future = None
        self._accept_futures = {}  # socket file descriptor => Future
        proactor.set_loop(self)
        self._make_self_pipe()

    def _make_socket_transport(self, sock, protocol, waiter=None,
                               extra=None, server=None):
        return _ProactorSocketTransport(self, sock, protocol, waiter,
                                        extra, server)

    def _make_ssl_transport(
            self, rawsock, protocol, sslcontext, waiter=None,
            *, server_side=False, server_hostname=None,
            extra=None, server=None,
            ssl_handshake_timeout=None):
        ssl_protocol = sslproto.SSLProtocol(
            self, protocol, sslcontext, waiter,
            server_side, server_hostname,
            ssl_handshake_timeout=ssl_handshake_timeout)
        _ProactorSocketTransport(self, rawsock, ssl_protocol,
                                 extra=extra, server=server)
        return ssl_protocol._app_transport

    def _make_duplex_pipe_transport(self, sock, protocol, waiter=None,
                                    extra=None):
        return _ProactorDuplexPipeTransport(self,
                                            sock, protocol, waiter, extra)

    def _make_read_pipe_transport(self, sock, protocol, waiter=None,
                                  extra=None):
        return _ProactorReadPipeTransport(self, sock, protocol, waiter, extra)

    def _make_write_pipe_transport(self, sock, protocol, waiter=None,
                                   extra=None):
        # We want connection_lost() to be called when other end closes
        return _ProactorWritePipeTransport(self,
                                           sock, protocol, waiter, extra)

    def close(self):
        if self.is_running():
            raise RuntimeError("Cannot close a running event loop")
        if self.is_closed():
            return

        # Call these methods before closing the event loop (before calling
        # BaseEventLoop.close), because they can schedule callbacks with
        # call_soon(), which is forbidden when the event loop is closed.
        self._stop_accept_futures()
        self._close_self_pipe()
        self._proactor.close()
        self._proactor = None
        self._selector = None

        # Close the event loop
        super().close()

    async def sock_recv(self, sock, n):
        return await self._proactor.recv(sock, n)

    async def sock_recv_into(self, sock, buf):
        return await self._proactor.recv_into(sock, buf)

    async def sock_sendall(self, sock, data):
        return await self._proactor.send(sock, data)

    async def sock_connect(self, sock, address):
        return await self._proactor.connect(sock, address)

    async def sock_accept(self, sock):
        return await self._proactor.accept(sock)

    async def _sock_sendfile_native(self, sock, file, offset, count):
        try:
            fileno = file.fileno()
        except (AttributeError, io.UnsupportedOperation) as err:
            raise events.SendfileNotAvailableError("not a regular file")
        try:
            fsize = os.fstat(fileno).st_size
        except OSError as err:
            raise events.SendfileNotAvailableError("not a regular file")
        blocksize = count if count else fsize
        if not blocksize:
            return 0  # empty file

        blocksize = min(blocksize, 0xffff_ffff)
        end_pos = min(offset + count, fsize) if count else fsize
        offset = min(offset, fsize)
        total_sent = 0
        try:
            while True:
                blocksize = min(end_pos - offset, blocksize)
                if blocksize <= 0:
                    return total_sent
                await self._proactor.sendfile(sock, file, offset, blocksize)
                offset += blocksize
                total_sent += blocksize
        finally:
            if total_sent > 0:
                file.seek(offset)

    async def _sendfile_native(self, transp, file, offset, count):
        resume_reading = transp.is_reading()
        transp.pause_reading()
        await transp._make_empty_waiter()
        try:
            return await self.sock_sendfile(transp._sock, file, offset, count,
                                            fallback=False)
        finally:
            transp._reset_empty_waiter()
            if resume_reading:
                transp.resume_reading()

    def _close_self_pipe(self):
        if self._self_reading_future is not None:
            self._self_reading_future.cancel()
            self._self_reading_future = None
        self._ssock.close()
        self._ssock = None
        self._csock.close()
        self._csock = None
        self._internal_fds -= 1

    def _make_self_pipe(self):
        # A self-socket, really. :-)
        self._ssock, self._csock = socket.socketpair()
        self._ssock.setblocking(False)
        self._csock.setblocking(False)
        self._internal_fds += 1
        self.call_soon(self._loop_self_reading)

    def _loop_self_reading(self, f=None):
        try:
            if f is not None:
                f.result()  # may raise
            f = self._proactor.recv(self._ssock, 4096)
        except futures.CancelledError:
            # _close_self_pipe() has been called, stop waiting for data
            return
        except Exception as exc:
            self.call_exception_handler({
                'message': 'Error on reading from the event loop self pipe',
                'exception': exc,
                'loop': self,
            })
        else:
            self._self_reading_future = f
            f.add_done_callback(self._loop_self_reading)

    def _write_to_self(self):
        self._csock.send(b'\0')

    def _start_serving(self, protocol_factory, sock,
                       sslcontext=None, server=None, backlog=100,
                       ssl_handshake_timeout=None):

        def loop(f=None):
            try:
                if f is not None:
                    conn, addr = f.result()
                    if self._debug:
                        logger.debug("%r got a new connection from %r: %r",
                                     server, addr, conn)
                    protocol = protocol_factory()
                    if sslcontext is not None:
                        self._make_ssl_transport(
                            conn, protocol, sslcontext, server_side=True,
                            extra={'peername': addr}, server=server,
                            ssl_handshake_timeout=ssl_handshake_timeout)
                    else:
                        self._make_socket_transport(
                            conn, protocol,
                            extra={'peername': addr}, server=server)
                if self.is_closed():
                    return
                f = self._proactor.accept(sock)
            except OSError as exc:
                if sock.fileno() != -1:
                    self.call_exception_handler({
                        'message': 'Accept failed on a socket',
                        'exception': exc,
                        'socket': sock,
                    })
                    sock.close()
                elif self._debug:
                    logger.debug("Accept failed on socket %r",
                                 sock, exc_info=True)
            except futures.CancelledError:
                sock.close()
            else:
                self._accept_futures[sock.fileno()] = f
                f.add_done_callback(loop)

        self.call_soon(loop)

    def _process_events(self, event_list):
        # Events are processed in the IocpProactor._poll() method
        pass

    def _stop_accept_futures(self):
        for future in self._accept_futures.values():
            future.cancel()
        self._accept_futures.clear()

    def _stop_serving(self, sock):
        future = self._accept_futures.pop(sock.fileno(), None)
        if future:
            future.cancel()
        self._proactor._stop_serving(sock)
        sock.close()
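
(Annotation, not part of the diff: BaseProactorEventLoop is the base class for the Windows-only ProactorEventLoop defined in windows_events.py, which is not shown in this hunk. A minimal, illustrative sketch of how that loop is typically installed in Python 3.7; the sys.platform guard is needed because the class only exists on Windows.)

    import asyncio
    import sys

    if sys.platform == 'win32':
        # ProactorEventLoop drives the IOCP-backed transports above,
        # e.g. _ProactorSocketTransport.
        loop = asyncio.ProactorEventLoop()
        asyncio.set_event_loop(loop)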

210  Lib/asyncio/protocols.py  (new file)
@@ -0,0 +1,210 @@
"""Abstract Protocol base classes."""

__all__ = (
    'BaseProtocol', 'Protocol', 'DatagramProtocol',
    'SubprocessProtocol', 'BufferedProtocol',
)


class BaseProtocol:
    """Common base class for protocol interfaces.

    Usually the user implements protocols that derive from BaseProtocol,
    like Protocol or ProcessProtocol.

    The only case when BaseProtocol should be implemented directly is a
    write-only transport, like a write pipe.
    """

    def connection_made(self, transport):
        """Called when a connection is made.

        The argument is the transport representing the pipe connection.
        To receive data, wait for data_received() calls.
        When the connection is closed, connection_lost() is called.
        """

    def connection_lost(self, exc):
        """Called when the connection is lost or closed.

        The argument is an exception object or None (the latter
        meaning a regular EOF is received or the connection was
        aborted or closed).
        """

    def pause_writing(self):
        """Called when the transport's buffer goes over the high-water mark.

        Pause and resume calls are paired -- pause_writing() is called
        once when the buffer goes strictly over the high-water mark
        (even if subsequent writes increase the buffer size even
        more), and eventually resume_writing() is called once when the
        buffer size reaches the low-water mark.

        Note that if the buffer size equals the high-water mark,
        pause_writing() is not called -- it must go strictly over.
        Conversely, resume_writing() is called when the buffer size is
        equal to or lower than the low-water mark.  These end conditions
        are important to ensure that things go as expected when either
        mark is zero.

        NOTE: This is the only Protocol callback that is not called
        through EventLoop.call_soon() -- if it were, it would have no
        effect when it's most needed (when the app keeps writing
        without yielding until pause_writing() is called).
        """

    def resume_writing(self):
        """Called when the transport's buffer drains below the low-water mark.

        See pause_writing() for details.
        """


class Protocol(BaseProtocol):
    """Interface for stream protocol.

    The user should implement this interface.  They can inherit from
    this class but don't need to.  The implementations here do
    nothing (they don't raise exceptions).

    When the user wants to request a transport, they pass a protocol
    factory to a utility function (e.g., EventLoop.create_connection()).

    When the connection is made successfully, connection_made() is
    called with a suitable transport object.  Then data_received()
    will be called 0 or more times with data (bytes) received from the
    transport; finally, connection_lost() will be called exactly once
    with either an exception object or None as an argument.

    State machine of calls:

      start -> CM [-> DR*] [-> ER?] -> CL -> end

    * CM: connection_made()
    * DR: data_received()
    * ER: eof_received()
    * CL: connection_lost()
    """

    def data_received(self, data):
        """Called when some data is received.

        The argument is a bytes object.
        """

    def eof_received(self):
        """Called when the other end calls write_eof() or equivalent.

        If this returns a false value (including None), the transport
        will close itself.  If it returns a true value, closing the
        transport is up to the protocol.
        """
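
(Annotation, not part of the file: a hypothetical minimal echo protocol, just to make the CM -> DR* -> CL state machine above concrete.)

    import asyncio

    class EchoProtocol(asyncio.Protocol):
        def connection_made(self, transport):
            # CM: keep the transport so data_received() can write back.
            self.transport = transport

        def data_received(self, data):
            # DR: may be called zero or more times with received bytes.
            self.transport.write(data)

        def connection_lost(self, exc):
            # CL: exc is None on a clean EOF or close.
            pass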

class BufferedProtocol(BaseProtocol):
    """Interface for stream protocol with manual buffer control.

    Important: this has been added to asyncio in Python 3.7
    *on a provisional basis*!  Consider it as an experimental API that
    might be changed or removed in Python 3.8.

    Event methods, such as `create_server` and `create_connection`,
    accept factories that return protocols that implement this interface.

    The idea of BufferedProtocol is that it allows the protocol to
    manually allocate and control the receive buffer.  Event loops can
    then use the buffer provided by the protocol to avoid unnecessary
    data copies.  This can result in noticeable performance improvement
    for protocols that receive large amounts of data.  Sophisticated
    protocols can allocate the buffer only once at creation time.

    State machine of calls:

      start -> CM [-> GB [-> BU?]]* [-> ER?] -> CL -> end

    * CM: connection_made()
    * GB: get_buffer()
    * BU: buffer_updated()
    * ER: eof_received()
    * CL: connection_lost()
    """

    def get_buffer(self, sizehint):
        """Called to allocate a new receive buffer.

        *sizehint* is a recommended minimal size for the returned
        buffer.  When set to -1, the buffer size can be arbitrary.

        Must return an object that implements the
        :ref:`buffer protocol <bufferobjects>`.
        It is an error to return a zero-sized buffer.
        """

    def buffer_updated(self, nbytes):
        """Called when the buffer was updated with the received data.

        *nbytes* is the total number of bytes that were written to
        the buffer.
        """

    def eof_received(self):
        """Called when the other end calls write_eof() or equivalent.

        If this returns a false value (including None), the transport
        will close itself.  If it returns a true value, closing the
        transport is up to the protocol.
        """


class DatagramProtocol(BaseProtocol):
    """Interface for datagram protocol."""

    def datagram_received(self, data, addr):
        """Called when some datagram is received."""

    def error_received(self, exc):
        """Called when a send or receive operation raises an OSError.

        (Other than BlockingIOError or InterruptedError.)
        """


class SubprocessProtocol(BaseProtocol):
    """Interface for protocol for subprocess calls."""

    def pipe_data_received(self, fd, data):
        """Called when the subprocess writes data into stdout/stderr pipe.

        fd is the int file descriptor.
        data is a bytes object.
        """

    def pipe_connection_lost(self, fd, exc):
        """Called when a file descriptor associated with the child process is
        closed.

        fd is the int file descriptor that was closed.
        """

    def process_exited(self):
        """Called when the subprocess has exited."""


def _feed_data_to_buffered_proto(proto, data):
    data_len = len(data)
    while data_len:
        buf = proto.get_buffer(data_len)
        buf_len = len(buf)
        if not buf_len:
            raise RuntimeError('get_buffer() returned an empty buffer')

        if buf_len >= data_len:
            buf[:data_len] = data
            proto.buffer_updated(data_len)
            return
        else:
            buf[:buf_len] = data[:buf_len]
            proto.buffer_updated(buf_len)
            data = data[buf_len:]
            data_len = len(data)
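
(Annotation, not part of the file: a rough sketch of a protocol using the provisional buffer-control interface above. The single preallocated buffer is the point of the API; _feed_data_to_buffered_proto() is what slices incoming data into it.)

    import asyncio

    class BufferedReceiver(asyncio.BufferedProtocol):
        def connection_made(self, transport):
            self.transport = transport
            # Allocated once; get_buffer() hands out the same object,
            # so the event loop writes into it without extra copies.
            self._buffer = bytearray(65536)

        def get_buffer(self, sizehint):
            # Must not return a zero-sized buffer (RuntimeError above).
            return self._buffer

        def buffer_updated(self, nbytes):
            # The first nbytes of the buffer now hold fresh data.
            payload = bytes(self._buffer[:nbytes])
            self.transport.write(payload)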

245  Lib/asyncio/queues.py  (new file)
@@ -0,0 +1,245 @@
__all__ = ('Queue', 'PriorityQueue', 'LifoQueue', 'QueueFull', 'QueueEmpty')

import collections
import heapq

from . import events
from . import locks


class QueueEmpty(Exception):
    """Raised when Queue.get_nowait() is called on an empty Queue."""
    pass


class QueueFull(Exception):
    """Raised when the Queue.put_nowait() method is called on a full Queue."""
    pass


class Queue:
    """A queue, useful for coordinating producer and consumer coroutines.

    If maxsize is less than or equal to zero, the queue size is infinite. If it
    is an integer greater than 0, then "await put()" will block when the
    queue reaches maxsize, until an item is removed by get().

    Unlike the standard library Queue, you can reliably know this Queue's size
    with qsize(), since your single-threaded asyncio application won't be
    interrupted between calling qsize() and doing an operation on the Queue.
    """

    def __init__(self, maxsize=0, *, loop=None):
        if loop is None:
            self._loop = events.get_event_loop()
        else:
            self._loop = loop
        self._maxsize = maxsize

        # Futures.
        self._getters = collections.deque()
        # Futures.
        self._putters = collections.deque()
        self._unfinished_tasks = 0
        self._finished = locks.Event(loop=self._loop)
        self._finished.set()
        self._init(maxsize)

    # These three are overridable in subclasses.

    def _init(self, maxsize):
        self._queue = collections.deque()

    def _get(self):
        return self._queue.popleft()

    def _put(self, item):
        self._queue.append(item)

    # End of the overridable methods.

    def _wakeup_next(self, waiters):
        # Wake up the next waiter (if any) that isn't cancelled.
        while waiters:
            waiter = waiters.popleft()
            if not waiter.done():
                waiter.set_result(None)
                break

    def __repr__(self):
        return f'<{type(self).__name__} at {id(self):#x} {self._format()}>'

    def __str__(self):
        return f'<{type(self).__name__} {self._format()}>'

    def _format(self):
        result = f'maxsize={self._maxsize!r}'
        if getattr(self, '_queue', None):
            result += f' _queue={list(self._queue)!r}'
        if self._getters:
            result += f' _getters[{len(self._getters)}]'
        if self._putters:
            result += f' _putters[{len(self._putters)}]'
        if self._unfinished_tasks:
            result += f' tasks={self._unfinished_tasks}'
        return result

    def qsize(self):
        """Number of items in the queue."""
        return len(self._queue)

    @property
    def maxsize(self):
        """Number of items allowed in the queue."""
        return self._maxsize

    def empty(self):
        """Return True if the queue is empty, False otherwise."""
        return not self._queue

    def full(self):
        """Return True if there are maxsize items in the queue.

        Note: if the Queue was initialized with maxsize=0 (the default),
        then full() is never True.
        """
        if self._maxsize <= 0:
            return False
        else:
            return self.qsize() >= self._maxsize

    async def put(self, item):
        """Put an item into the queue.

        If the queue is full, wait until a free slot is available
        before adding the item.
        """
        while self.full():
            putter = self._loop.create_future()
            self._putters.append(putter)
            try:
                await putter
            except:
                putter.cancel()  # Just in case putter is not done yet.
                try:
                    # Clean self._putters from canceled putters.
                    self._putters.remove(putter)
                except ValueError:
                    # The putter could be removed from self._putters by a
                    # previous get_nowait call.
                    pass
                if not self.full() and not putter.cancelled():
                    # We were woken up by get_nowait(), but can't take
                    # the call.  Wake up the next in line.
                    self._wakeup_next(self._putters)
                raise
        return self.put_nowait(item)

    def put_nowait(self, item):
        """Put an item into the queue without blocking.

        If no free slot is immediately available, raise QueueFull.
        """
        if self.full():
            raise QueueFull
        self._put(item)
        self._unfinished_tasks += 1
        self._finished.clear()
        self._wakeup_next(self._getters)

    async def get(self):
        """Remove and return an item from the queue.

        If the queue is empty, wait until an item is available.
        """
        while self.empty():
            getter = self._loop.create_future()
            self._getters.append(getter)
            try:
                await getter
            except:
                getter.cancel()  # Just in case getter is not done yet.
                try:
                    # Clean self._getters from canceled getters.
                    self._getters.remove(getter)
                except ValueError:
                    # The getter could be removed from self._getters by a
                    # previous put_nowait call.
                    pass
                if not self.empty() and not getter.cancelled():
                    # We were woken up by put_nowait(), but can't take
                    # the call.  Wake up the next in line.
                    self._wakeup_next(self._getters)
                raise
        return self.get_nowait()

    def get_nowait(self):
        """Remove and return an item from the queue.

        Return an item if one is immediately available, else raise QueueEmpty.
        """
        if self.empty():
            raise QueueEmpty
        item = self._get()
        self._wakeup_next(self._putters)
        return item

    def task_done(self):
        """Indicate that a formerly enqueued task is complete.

        Used by queue consumers.  For each get() used to fetch a task,
        a subsequent call to task_done() tells the queue that the processing
        on the task is complete.

        If a join() is currently blocking, it will resume when all items have
        been processed (meaning that a task_done() call was received for every
        item that had been put() into the queue).

        Raises ValueError if called more times than there were items placed in
        the queue.
        """
        if self._unfinished_tasks <= 0:
            raise ValueError('task_done() called too many times')
        self._unfinished_tasks -= 1
        if self._unfinished_tasks == 0:
            self._finished.set()

    async def join(self):
        """Block until all items in the queue have been gotten and processed.

        The count of unfinished tasks goes up whenever an item is added to the
        queue.  The count goes down whenever a consumer calls task_done() to
        indicate that the item was retrieved and all work on it is complete.
        When the count of unfinished tasks drops to zero, join() unblocks.
        """
        if self._unfinished_tasks > 0:
            await self._finished.wait()
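
(Annotation, not part of the file: a short usage sketch of the put()/get()/task_done()/join() cycle documented above, assuming Python 3.7's asyncio.run() and create_task().)

    import asyncio

    async def worker(queue):
        while True:
            item = await queue.get()      # waits while the queue is empty
            print('processed', item)
            queue.task_done()             # lets join() unblock eventually

    async def main():
        queue = asyncio.Queue(maxsize=2)
        consumer = asyncio.create_task(worker(queue))
        for i in range(5):
            await queue.put(i)            # waits once two items are pending
        await queue.join()                # returns after five task_done() calls
        consumer.cancel()

    asyncio.run(main())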

class PriorityQueue(Queue):
    """A subclass of Queue; retrieves entries in priority order (lowest first).

    Entries are typically tuples of the form: (priority number, data).
    """

    def _init(self, maxsize):
        self._queue = []

    def _put(self, item, heappush=heapq.heappush):
        heappush(self._queue, item)

    def _get(self, heappop=heapq.heappop):
        return heappop(self._queue)


class LifoQueue(Queue):
    """A subclass of Queue that retrieves most recently added entries first."""

    def _init(self, maxsize):
        self._queue = []

    def _put(self, item):
        self._queue.append(item)

    def _get(self):
        return self._queue.pop()
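
(Annotation, not part of the file: the two subclasses only swap the _init/_put/_get hooks; an illustrative check of the resulting ordering.)

    import asyncio

    pq = asyncio.PriorityQueue()
    for entry in [(3, 'low'), (1, 'high'), (2, 'mid')]:
        pq.put_nowait(entry)          # heapq keeps the smallest entry first
    assert pq.get_nowait() == (1, 'high')

    lq = asyncio.LifoQueue()
    for value in 'abc':
        lq.put_nowait(value)          # plain list used as a stack
    assert lq.get_nowait() == 'c'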

72  Lib/asyncio/runners.py  (new file)
@@ -0,0 +1,72 @@
__all__ = 'run',

from . import coroutines
from . import events
from . import tasks


def run(main, *, debug=False):
    """Run a coroutine.

    This function runs the passed coroutine, taking care of
    managing the asyncio event loop and finalizing asynchronous
    generators.

    This function cannot be called when another asyncio event loop is
    running in the same thread.

    If debug is True, the event loop will be run in debug mode.

    This function always creates a new event loop and closes it at the end.
    It should be used as a main entry point for asyncio programs, and should
    ideally only be called once.

    Example:

        async def main():
            await asyncio.sleep(1)
            print('hello')

        asyncio.run(main())
    """
    if events._get_running_loop() is not None:
        raise RuntimeError(
            "asyncio.run() cannot be called from a running event loop")

    if not coroutines.iscoroutine(main):
        raise ValueError("a coroutine was expected, got {!r}".format(main))

    loop = events.new_event_loop()
    try:
        events.set_event_loop(loop)
        loop.set_debug(debug)
        return loop.run_until_complete(main)
    finally:
        try:
            _cancel_all_tasks(loop)
            loop.run_until_complete(loop.shutdown_asyncgens())
        finally:
            events.set_event_loop(None)
            loop.close()


def _cancel_all_tasks(loop):
    to_cancel = tasks.all_tasks(loop)
    if not to_cancel:
        return

    for task in to_cancel:
        task.cancel()

    loop.run_until_complete(
        tasks.gather(*to_cancel, loop=loop, return_exceptions=True))

    for task in to_cancel:
        if task.cancelled():
            continue
        if task.exception() is not None:
            loop.call_exception_handler({
                'message': 'unhandled exception during asyncio.run() shutdown',
                'exception': task.exception(),
                'task': task,
            })
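
(Annotation, not part of the file: one consequence of _cancel_all_tasks() worth making concrete is that run() does not leak still-pending tasks; they are cancelled and awaited before the loop closes. Illustrative sketch.)

    import asyncio

    async def background():
        try:
            await asyncio.sleep(3600)
        except asyncio.CancelledError:
            print('cancelled at shutdown')   # runs while _cancel_all_tasks()
            raise                            # gathers the pending tasks

    async def main():
        asyncio.create_task(background())
        await asyncio.sleep(0.1)             # main() returns; task still pending

    asyncio.run(main())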

1026  Lib/asyncio/selector_events.py  (new file)
(Diff suppressed because the file is too large.)

723  Lib/asyncio/sslproto.py  (new file)
@@ -0,0 +1,723 @@
import collections
import warnings
try:
    import ssl
except ImportError:  # pragma: no cover
    ssl = None

from . import base_events
from . import constants
from . import protocols
from . import transports
from .log import logger


def _create_transport_context(server_side, server_hostname):
    if server_side:
        raise ValueError('Server side SSL needs a valid SSLContext')

    # Client side may pass ssl=True to use a default
    # context; in that case the sslcontext passed is None.
    # The default is secure for client connections.
    # Python 3.4+: use up-to-date strong settings.
    sslcontext = ssl.create_default_context()
    if not server_hostname:
        sslcontext.check_hostname = False
    return sslcontext


# States of an _SSLPipe.
_UNWRAPPED = "UNWRAPPED"
_DO_HANDSHAKE = "DO_HANDSHAKE"
_WRAPPED = "WRAPPED"
_SHUTDOWN = "SHUTDOWN"


class _SSLPipe(object):
    """An SSL "Pipe".

    An SSL pipe allows you to communicate with an SSL/TLS protocol instance
    through memory buffers. It can be used to implement a security layer for an
    existing connection where you don't have access to the connection's file
    descriptor, or for some reason you don't want to use it.

    An SSL pipe can be in "wrapped" and "unwrapped" mode. In unwrapped mode,
    data is passed through untransformed. In wrapped mode, application level
    data is encrypted to SSL record level data and vice versa. The SSL record
    level is the lowest level in the SSL protocol suite and is what travels
    as-is over the wire.

    An _SSLPipe initially is in "unwrapped" mode. To start SSL, call
    do_handshake(). To shut down SSL again, call unwrap().
    """

    max_size = 256 * 1024  # Buffer size passed to read()

    def __init__(self, context, server_side, server_hostname=None):
        """
        The *context* argument specifies the ssl.SSLContext to use.

        The *server_side* argument indicates whether this is a server side or
        client side transport.

        The optional *server_hostname* argument can be used to specify the
        hostname you are connecting to. You may only specify this parameter if
        the _ssl module supports Server Name Indication (SNI).
        """
        self._context = context
        self._server_side = server_side
        self._server_hostname = server_hostname
        self._state = _UNWRAPPED
        self._incoming = ssl.MemoryBIO()
        self._outgoing = ssl.MemoryBIO()
        self._sslobj = None
        self._need_ssldata = False
        self._handshake_cb = None
        self._shutdown_cb = None

    @property
    def context(self):
        """The SSL context passed to the constructor."""
        return self._context

    @property
    def ssl_object(self):
        """The internal ssl.SSLObject instance.

        Return None if the pipe is not wrapped.
        """
        return self._sslobj

    @property
    def need_ssldata(self):
        """Whether more record level data is needed to complete a handshake
        that is currently in progress."""
        return self._need_ssldata

    @property
    def wrapped(self):
        """
        Whether a security layer is currently in effect.

        Return False during handshake.
        """
        return self._state == _WRAPPED

    def do_handshake(self, callback=None):
        """Start the SSL handshake.

        Return a list of ssldata. An ssldata element is a list of buffers
        containing record level data to be sent to the remote end.

        The optional *callback* argument can be used to install a callback that
        will be called when the handshake is complete. The callback will be
        called with None if successful, else an exception instance.
        """
        if self._state != _UNWRAPPED:
            raise RuntimeError('handshake in progress or completed')
        self._sslobj = self._context.wrap_bio(
            self._incoming, self._outgoing,
            server_side=self._server_side,
            server_hostname=self._server_hostname)
        self._state = _DO_HANDSHAKE
        self._handshake_cb = callback
        ssldata, appdata = self.feed_ssldata(b'', only_handshake=True)
        assert len(appdata) == 0
        return ssldata

    def shutdown(self, callback=None):
        """Start the SSL shutdown sequence.

        Return a list of ssldata. An ssldata element is a list of buffers
        containing record level data to be sent to the remote end.

        The optional *callback* argument can be used to install a callback that
        will be called when the shutdown is complete. The callback will be
        called without arguments.
        """
        if self._state == _UNWRAPPED:
            raise RuntimeError('no security layer present')
        if self._state == _SHUTDOWN:
            raise RuntimeError('shutdown in progress')
        assert self._state in (_WRAPPED, _DO_HANDSHAKE)
        self._state = _SHUTDOWN
        self._shutdown_cb = callback
        ssldata, appdata = self.feed_ssldata(b'')
        assert appdata == [] or appdata == [b'']
        return ssldata

    def feed_eof(self):
        """Send a potentially "ragged" EOF.

        This method will raise an SSL_ERROR_EOF exception if the EOF is
        unexpected.
        """
        self._incoming.write_eof()
        ssldata, appdata = self.feed_ssldata(b'')
        assert appdata == [] or appdata == [b'']

    def feed_ssldata(self, data, only_handshake=False):
        """Feed SSL record level data into the pipe.

        The data must be a bytes instance. It is OK to send an empty bytes
        instance. This can be used to get ssldata for a handshake initiated by
        this endpoint.

        Return a (ssldata, appdata) tuple. The ssldata element is a list of
        buffers containing SSL data that needs to be sent to the remote SSL.

        The appdata element is a list of buffers containing plaintext data that
        needs to be forwarded to the application. The appdata list may contain
        an empty buffer indicating an SSL "close_notify" alert. This alert must
        be acknowledged by calling shutdown().
        """
        if self._state == _UNWRAPPED:
            # If unwrapped, pass plaintext data straight through.
            if data:
                appdata = [data]
            else:
                appdata = []
            return ([], appdata)

        self._need_ssldata = False
        if data:
            self._incoming.write(data)

        ssldata = []
        appdata = []
        try:
            if self._state == _DO_HANDSHAKE:
                # Call do_handshake() until it doesn't raise anymore.
                self._sslobj.do_handshake()
                self._state = _WRAPPED
                if self._handshake_cb:
                    self._handshake_cb(None)
                if only_handshake:
                    return (ssldata, appdata)
                # Handshake done: execute the wrapped block

            if self._state == _WRAPPED:
                # Main state: read data from SSL until close_notify
                while True:
                    chunk = self._sslobj.read(self.max_size)
                    appdata.append(chunk)
                    if not chunk:  # close_notify
                        break

            elif self._state == _SHUTDOWN:
                # Call shutdown() until it doesn't raise anymore.
                self._sslobj.unwrap()
                self._sslobj = None
                self._state = _UNWRAPPED
                if self._shutdown_cb:
                    self._shutdown_cb()

            elif self._state == _UNWRAPPED:
                # Drain possible plaintext data after close_notify.
                appdata.append(self._incoming.read())
        except (ssl.SSLError, ssl.CertificateError) as exc:
            exc_errno = getattr(exc, 'errno', None)
            if exc_errno not in (
                    ssl.SSL_ERROR_WANT_READ, ssl.SSL_ERROR_WANT_WRITE,
                    ssl.SSL_ERROR_SYSCALL):
                if self._state == _DO_HANDSHAKE and self._handshake_cb:
                    self._handshake_cb(exc)
                raise
            self._need_ssldata = (exc_errno == ssl.SSL_ERROR_WANT_READ)

        # Check for record level data that needs to be sent back.
        # Happens for the initial handshake and renegotiations.
        if self._outgoing.pending:
            ssldata.append(self._outgoing.read())
        return (ssldata, appdata)

    def feed_appdata(self, data, offset=0):
        """Feed plaintext data into the pipe.

        Return an (ssldata, offset) tuple. The ssldata element is a list of
        buffers containing record level data that needs to be sent to the
        remote SSL instance. The offset is the number of plaintext bytes that
        were processed, which may be less than the length of data.

        NOTE: In case of short writes, this call MUST be retried with the SAME
        buffer passed into the *data* argument (i.e. the id() must be the
        same). This is an OpenSSL requirement. A further particularity is that
        a short write will always have offset == 0, because the _ssl module
        does not enable partial writes. And even though the offset is zero,
        there will still be encrypted data in ssldata.
        """
        assert 0 <= offset <= len(data)
        if self._state == _UNWRAPPED:
            # pass through data in unwrapped mode
            if offset < len(data):
                ssldata = [data[offset:]]
            else:
                ssldata = []
            return (ssldata, len(data))

        ssldata = []
        view = memoryview(data)
        while True:
            self._need_ssldata = False
            try:
                if offset < len(view):
                    offset += self._sslobj.write(view[offset:])
            except ssl.SSLError as exc:
                # It is not allowed to call write() after unwrap() until the
                # close_notify is acknowledged. We return the condition to the
                # caller as a short write.
                exc_errno = getattr(exc, 'errno', None)
                if exc.reason == 'PROTOCOL_IS_SHUTDOWN':
                    exc_errno = exc.errno = ssl.SSL_ERROR_WANT_READ
                if exc_errno not in (ssl.SSL_ERROR_WANT_READ,
                                     ssl.SSL_ERROR_WANT_WRITE,
                                     ssl.SSL_ERROR_SYSCALL):
                    raise
                self._need_ssldata = (exc_errno == ssl.SSL_ERROR_WANT_READ)

            # See if there's any record level data back for us.
            if self._outgoing.pending:
                ssldata.append(self._outgoing.read())
            if offset == len(view) or self._need_ssldata:
                break
        return (ssldata, offset)
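
(Annotation, not part of the file: _SSLPipe is a small state machine over ssl's memory-BIO support. A stripped-down, client-side sketch of the underlying pattern, with a placeholder host name and error handling elided.)

    import ssl

    ctx = ssl.create_default_context()
    incoming = ssl.MemoryBIO()     # TLS records received from the peer go here
    outgoing = ssl.MemoryBIO()     # TLS records to transmit accumulate here
    sslobj = ctx.wrap_bio(incoming, outgoing, server_hostname='example.com')

    try:
        sslobj.do_handshake()      # keeps raising until records are exchanged
    except ssl.SSLWantReadError:
        pending = outgoing.read()  # send these bytes to the peer, feed the
                                   # reply into incoming.write(), then retry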

class _SSLProtocolTransport(transports._FlowControlMixin,
                            transports.Transport):

    _sendfile_compatible = constants._SendfileMode.FALLBACK

    def __init__(self, loop, ssl_protocol):
        self._loop = loop
        # SSLProtocol instance
        self._ssl_protocol = ssl_protocol
        self._closed = False

    def get_extra_info(self, name, default=None):
        """Get optional transport information."""
        return self._ssl_protocol._get_extra_info(name, default)

    def set_protocol(self, protocol):
        self._ssl_protocol._set_app_protocol(protocol)

    def get_protocol(self):
        return self._ssl_protocol._app_protocol

    def is_closing(self):
        return self._closed

    def close(self):
        """Close the transport.

        Buffered data will be flushed asynchronously.  No more data
        will be received.  After all buffered data is flushed, the
        protocol's connection_lost() method will (eventually) be called
        with None as its argument.
        """
        self._closed = True
        self._ssl_protocol._start_shutdown()

    def __del__(self):
        if not self._closed:
            warnings.warn(f"unclosed transport {self!r}", ResourceWarning,
                          source=self)
            self.close()

    def is_reading(self):
        tr = self._ssl_protocol._transport
        if tr is None:
            raise RuntimeError('SSL transport has not been initialized yet')
        return tr.is_reading()

    def pause_reading(self):
        """Pause the receiving end.

        No data will be passed to the protocol's data_received()
        method until resume_reading() is called.
        """
        self._ssl_protocol._transport.pause_reading()

    def resume_reading(self):
        """Resume the receiving end.

        Data received will once again be passed to the protocol's
        data_received() method.
        """
        self._ssl_protocol._transport.resume_reading()

    def set_write_buffer_limits(self, high=None, low=None):
        """Set the high- and low-water limits for write flow control.

        These two values control when to call the protocol's
        pause_writing() and resume_writing() methods.  If specified,
        the low-water limit must be less than or equal to the
        high-water limit.  Neither value can be negative.

        The defaults are implementation-specific.  If only the
        high-water limit is given, the low-water limit defaults to an
        implementation-specific value less than or equal to the
        high-water limit.  Setting high to zero forces low to zero as
        well, and causes pause_writing() to be called whenever the
        buffer becomes non-empty.  Setting low to zero causes
        resume_writing() to be called only once the buffer is empty.
        Use of zero for either limit is generally sub-optimal as it
        reduces opportunities for doing I/O and computation
        concurrently.
        """
        self._ssl_protocol._transport.set_write_buffer_limits(high, low)

    def get_write_buffer_size(self):
        """Return the current size of the write buffer."""
        return self._ssl_protocol._transport.get_write_buffer_size()

    @property
    def _protocol_paused(self):
        # Required for sendfile fallback pause_writing/resume_writing logic
        return self._ssl_protocol._transport._protocol_paused

    def write(self, data):
        """Write some data bytes to the transport.

        This does not block; it buffers the data and arranges for it
        to be sent out asynchronously.
        """
        if not isinstance(data, (bytes, bytearray, memoryview)):
            raise TypeError(f"data: expecting a bytes-like instance, "
                            f"got {type(data).__name__}")
        if not data:
            return
        self._ssl_protocol._write_appdata(data)

    def can_write_eof(self):
        """Return True if this transport supports write_eof(), False if not."""
        return False

    def abort(self):
        """Close the transport immediately.

        Buffered data will be lost.  No more data will be received.
        The protocol's connection_lost() method will (eventually) be
        called with None as its argument.
        """
        self._ssl_protocol._abort()
        self._closed = True


class SSLProtocol(protocols.Protocol):
    """SSL protocol.

    Implementation of SSL on top of a socket using incoming and outgoing
    buffers which are ssl.MemoryBIO objects.
    """

    def __init__(self, loop, app_protocol, sslcontext, waiter,
                 server_side=False, server_hostname=None,
                 call_connection_made=True,
                 ssl_handshake_timeout=None):
        if ssl is None:
            raise RuntimeError('stdlib ssl module not available')

        if ssl_handshake_timeout is None:
            ssl_handshake_timeout = constants.SSL_HANDSHAKE_TIMEOUT
        elif ssl_handshake_timeout <= 0:
            raise ValueError(
                f"ssl_handshake_timeout should be a positive number, "
                f"got {ssl_handshake_timeout}")

        if not sslcontext:
            sslcontext = _create_transport_context(
                server_side, server_hostname)

        self._server_side = server_side
        if server_hostname and not server_side:
            self._server_hostname = server_hostname
        else:
            self._server_hostname = None
        self._sslcontext = sslcontext
        # SSL-specific extra info. More info is set when the handshake
        # completes.
        self._extra = dict(sslcontext=sslcontext)

        # App data write buffering
        self._write_backlog = collections.deque()
        self._write_buffer_size = 0

        self._waiter = waiter
        self._loop = loop
        self._set_app_protocol(app_protocol)
        self._app_transport = _SSLProtocolTransport(self._loop, self)
        # _SSLPipe instance (None until the connection is made)
        self._sslpipe = None
        self._session_established = False
        self._in_handshake = False
        self._in_shutdown = False
        # transport, ex: SelectorSocketTransport
        self._transport = None
        self._call_connection_made = call_connection_made
        self._ssl_handshake_timeout = ssl_handshake_timeout

    def _set_app_protocol(self, app_protocol):
        self._app_protocol = app_protocol
        self._app_protocol_is_buffer = \
            isinstance(app_protocol, protocols.BufferedProtocol)

    def _wakeup_waiter(self, exc=None):
        if self._waiter is None:
            return
        if not self._waiter.cancelled():
            if exc is not None:
                self._waiter.set_exception(exc)
            else:
                self._waiter.set_result(None)
        self._waiter = None

    def connection_made(self, transport):
        """Called when the low-level connection is made.

        Start the SSL handshake.
        """
        self._transport = transport
        self._sslpipe = _SSLPipe(self._sslcontext,
                                 self._server_side,
                                 self._server_hostname)
        self._start_handshake()

    def connection_lost(self, exc):
        """Called when the low-level connection is lost or closed.

        The argument is an exception object or None (the latter
        meaning a regular EOF is received or the connection was
        aborted or closed).
        """
        if self._session_established:
            self._session_established = False
            self._loop.call_soon(self._app_protocol.connection_lost, exc)
        else:
            # Most likely an exception occurred while in SSL handshake.
            # Just mark the app transport as closed so that its __del__
            # doesn't complain.
            if self._app_transport is not None:
                self._app_transport._closed = True
        self._transport = None
        self._app_transport = None
        self._wakeup_waiter(exc)

    def pause_writing(self):
        """Called when the low-level transport's buffer goes over
        the high-water mark.
        """
        self._app_protocol.pause_writing()

    def resume_writing(self):
        """Called when the low-level transport's buffer drains below
        the low-water mark.
        """
        self._app_protocol.resume_writing()

    def data_received(self, data):
        """Called when some SSL data is received.

        The argument is a bytes object.
        """
        if self._sslpipe is None:
            # transport closing, sslpipe is destroyed
            return

        try:
            ssldata, appdata = self._sslpipe.feed_ssldata(data)
        except Exception as e:
            self._fatal_error(e, 'SSL error in data received')
            return

        for chunk in ssldata:
            self._transport.write(chunk)

        for chunk in appdata:
            if chunk:
                try:
                    if self._app_protocol_is_buffer:
                        protocols._feed_data_to_buffered_proto(
                            self._app_protocol, chunk)
                    else:
                        self._app_protocol.data_received(chunk)
                except Exception as ex:
                    self._fatal_error(
                        ex, 'application protocol failed to receive SSL data')
                    return
            else:
                self._start_shutdown()
                break

    def eof_received(self):
        """Called when the other end of the low-level stream
        is half-closed.

        If this returns a false value (including None), the transport
        will close itself.  If it returns a true value, closing the
        transport is up to the protocol.
        """
        try:
            if self._loop.get_debug():
                logger.debug("%r received EOF", self)

            self._wakeup_waiter(ConnectionResetError)

            if not self._in_handshake:
                keep_open = self._app_protocol.eof_received()
                if keep_open:
                    logger.warning('returning true from eof_received() '
                                   'has no effect when using ssl')
        finally:
            self._transport.close()

    def _get_extra_info(self, name, default=None):
        if name in self._extra:
            return self._extra[name]
        elif self._transport is not None:
            return self._transport.get_extra_info(name, default)
        else:
            return default

    def _start_shutdown(self):
        if self._in_shutdown:
            return
        if self._in_handshake:
            self._abort()
        else:
            self._in_shutdown = True
            self._write_appdata(b'')

    def _write_appdata(self, data):
        self._write_backlog.append((data, 0))
        self._write_buffer_size += len(data)
        self._process_write_backlog()

    def _start_handshake(self):
        if self._loop.get_debug():
            logger.debug("%r starts SSL handshake", self)
            self._handshake_start_time = self._loop.time()
        else:
            self._handshake_start_time = None
        self._in_handshake = True
        # (b'', 1) is a special value in _process_write_backlog() to do
        # the SSL handshake
        self._write_backlog.append((b'', 1))
        self._handshake_timeout_handle = \
            self._loop.call_later(self._ssl_handshake_timeout,
                                  self._check_handshake_timeout)
        self._process_write_backlog()

    def _check_handshake_timeout(self):
        if self._in_handshake is True:
            msg = (
                f"SSL handshake is taking longer than "
                f"{self._ssl_handshake_timeout} seconds: "
                f"aborting the connection"
            )
            self._fatal_error(ConnectionAbortedError(msg))

    def _on_handshake_complete(self, handshake_exc):
        self._in_handshake = False
        self._handshake_timeout_handle.cancel()

        sslobj = self._sslpipe.ssl_object
        try:
            if handshake_exc is not None:
                raise handshake_exc

            peercert = sslobj.getpeercert()
        except Exception as exc:
            if isinstance(exc, ssl.CertificateError):
                msg = 'SSL handshake failed on verifying the certificate'
            else:
                msg = 'SSL handshake failed'
            self._fatal_error(exc, msg)
            return

        if self._loop.get_debug():
            dt = self._loop.time() - self._handshake_start_time
            logger.debug("%r: SSL handshake took %.1f ms", self, dt * 1e3)

        # Add extra info that becomes available after handshake.
        self._extra.update(peercert=peercert,
                           cipher=sslobj.cipher(),
                           compression=sslobj.compression(),
                           ssl_object=sslobj,
                           )
        if self._call_connection_made:
            self._app_protocol.connection_made(self._app_transport)
|
||||||
|
self._wakeup_waiter()
|
||||||
|
self._session_established = True
|
||||||
|
# In case transport.write() was already called. Don't call
|
||||||
|
# immediately _process_write_backlog(), but schedule it:
|
||||||
|
# _on_handshake_complete() can be called indirectly from
|
||||||
|
# _process_write_backlog(), and _process_write_backlog() is not
|
||||||
|
# reentrant.
|
||||||
|
self._loop.call_soon(self._process_write_backlog)
|
||||||
|
|
||||||
|
def _process_write_backlog(self):
|
||||||
|
# Try to make progress on the write backlog.
|
||||||
|
if self._transport is None or self._sslpipe is None:
|
||||||
|
return
|
||||||
|
|
||||||
|
try:
|
||||||
|
for i in range(len(self._write_backlog)):
|
||||||
|
data, offset = self._write_backlog[0]
|
||||||
|
if data:
|
||||||
|
ssldata, offset = self._sslpipe.feed_appdata(data, offset)
|
||||||
|
elif offset:
|
||||||
|
ssldata = self._sslpipe.do_handshake(
|
||||||
|
self._on_handshake_complete)
|
||||||
|
offset = 1
|
||||||
|
else:
|
||||||
|
ssldata = self._sslpipe.shutdown(self._finalize)
|
||||||
|
offset = 1
|
||||||
|
|
||||||
|
for chunk in ssldata:
|
||||||
|
self._transport.write(chunk)
|
||||||
|
|
||||||
|
if offset < len(data):
|
||||||
|
self._write_backlog[0] = (data, offset)
|
||||||
|
# A short write means that a write is blocked on a read
|
||||||
|
# We need to enable reading if it is paused!
|
||||||
|
assert self._sslpipe.need_ssldata
|
||||||
|
if self._transport._paused:
|
||||||
|
self._transport.resume_reading()
|
||||||
|
break
|
||||||
|
|
||||||
|
# An entire chunk from the backlog was processed. We can
|
||||||
|
# delete it and reduce the outstanding buffer size.
|
||||||
|
del self._write_backlog[0]
|
||||||
|
self._write_buffer_size -= len(data)
|
||||||
|
except Exception as exc:
|
||||||
|
if self._in_handshake:
|
||||||
|
# Exceptions will be re-raised in _on_handshake_complete.
|
||||||
|
self._on_handshake_complete(exc)
|
||||||
|
else:
|
||||||
|
self._fatal_error(exc, 'Fatal error on SSL transport')
|
||||||
|
|
||||||
|
def _fatal_error(self, exc, message='Fatal error on transport'):
|
||||||
|
if isinstance(exc, base_events._FATAL_ERROR_IGNORE):
|
||||||
|
if self._loop.get_debug():
|
||||||
|
logger.debug("%r: %s", self, message, exc_info=True)
|
||||||
|
else:
|
||||||
|
self._loop.call_exception_handler({
|
||||||
|
'message': message,
|
||||||
|
'exception': exc,
|
||||||
|
'transport': self._transport,
|
||||||
|
'protocol': self,
|
||||||
|
})
|
||||||
|
if self._transport:
|
||||||
|
self._transport._force_close(exc)
|
||||||
|
|
||||||
|
def _finalize(self):
|
||||||
|
self._sslpipe = None
|
||||||
|
|
||||||
|
if self._transport is not None:
|
||||||
|
self._transport.close()
|
||||||
|
|
||||||
|
def _abort(self):
|
||||||
|
try:
|
||||||
|
if self._transport is not None:
|
||||||
|
self._transport.abort()
|
||||||
|
finally:
|
||||||
|
self._finalize()
|
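The handshake timeout enforced by _check_handshake_timeout() above surfaces to applications through the ssl_handshake_timeout keyword of the connection APIs. A minimal sketch, assuming a reachable TLS host (the hostname and timeout value are illustrative):

import asyncio
import ssl

async def tls_probe(host):
    ctx = ssl.create_default_context()
    # If the peer stalls the handshake past 10 s, the protocol aborts the
    # connection and this call fails with ConnectionAbortedError.
    reader, writer = await asyncio.open_connection(
        host, 443, ssl=ctx, ssl_handshake_timeout=10.0)
    print(writer.get_extra_info('cipher'))  # populated after the handshake
    writer.close()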
697 Lib/asyncio/streams.py Normal file
@@ -0,0 +1,697 @@
__all__ = (
    'StreamReader', 'StreamWriter', 'StreamReaderProtocol',
    'open_connection', 'start_server',
    'IncompleteReadError', 'LimitOverrunError',
)

import socket

if hasattr(socket, 'AF_UNIX'):
    __all__ += ('open_unix_connection', 'start_unix_server')

from . import coroutines
from . import events
from . import protocols
from .log import logger
from .tasks import sleep


_DEFAULT_LIMIT = 2 ** 16  # 64 KiB


class IncompleteReadError(EOFError):
    """
    Incomplete read error. Attributes:

    - partial: read bytes string before the end of stream was reached
    - expected: total number of expected bytes (or None if unknown)
    """
    def __init__(self, partial, expected):
        super().__init__(f'{len(partial)} bytes read on a total of '
                         f'{expected!r} expected bytes')
        self.partial = partial
        self.expected = expected

    def __reduce__(self):
        return type(self), (self.partial, self.expected)


class LimitOverrunError(Exception):
    """Reached the buffer limit while looking for a separator.

    Attributes:
    - consumed: total number of to be consumed bytes.
    """
    def __init__(self, message, consumed):
        super().__init__(message)
        self.consumed = consumed

    def __reduce__(self):
        return type(self), (self.args[0], self.consumed)


async def open_connection(host=None, port=None, *,
                          loop=None, limit=_DEFAULT_LIMIT, **kwds):
    """A wrapper for create_connection() returning a (reader, writer) pair.

    The reader returned is a StreamReader instance; the writer is a
    StreamWriter instance.

    The arguments are all the usual arguments to create_connection()
    except protocol_factory; most common are positional host and port,
    with various optional keyword arguments following.

    Additional optional keyword arguments are loop (to set the event loop
    instance to use) and limit (to set the buffer limit passed to the
    StreamReader).

    (If you want to customize the StreamReader and/or
    StreamReaderProtocol classes, just copy the code -- there's
    really nothing special here except some convenience.)
    """
    if loop is None:
        loop = events.get_event_loop()
    reader = StreamReader(limit=limit, loop=loop)
    protocol = StreamReaderProtocol(reader, loop=loop)
    transport, _ = await loop.create_connection(
        lambda: protocol, host, port, **kwds)
    writer = StreamWriter(transport, protocol, reader, loop)
    return reader, writer

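A minimal client sketch built on open_connection() (the host and request bytes are illustrative):

import asyncio

async def fetch_status_line(host, port=80):
    reader, writer = await asyncio.open_connection(host, port)
    writer.write(b'GET / HTTP/1.0\r\nHost: ' + host.encode() + b'\r\n\r\n')
    await writer.drain()              # cooperate with flow control
    status = await reader.readline()  # e.g. b'HTTP/1.0 200 OK\r\n'
    writer.close()
    return status

print(asyncio.run(fetch_status_line('example.com')))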
async def start_server(client_connected_cb, host=None, port=None, *,
                       loop=None, limit=_DEFAULT_LIMIT, **kwds):
    """Start a socket server, call back for each client connected.

    The first parameter, `client_connected_cb`, takes two parameters:
    client_reader, client_writer.  client_reader is a StreamReader
    object, while client_writer is a StreamWriter object.  This
    parameter can either be a plain callback function or a coroutine;
    if it is a coroutine, it will be automatically converted into a
    Task.

    The rest of the arguments are all the usual arguments to
    loop.create_server() except protocol_factory; most common are
    positional host and port, with various optional keyword arguments
    following.

    Additional optional keyword arguments are loop (to set the event loop
    instance to use) and limit (to set the buffer limit passed to the
    StreamReader).

    The return value is the same as loop.create_server(), i.e. a
    Server object which can be used to stop the service.
    """
    if loop is None:
        loop = events.get_event_loop()

    def factory():
        reader = StreamReader(limit=limit, loop=loop)
        protocol = StreamReaderProtocol(reader, client_connected_cb,
                                        loop=loop)
        return protocol

    return await loop.create_server(factory, host, port, **kwds)

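A minimal echo-server sketch using start_server() (the address and port are illustrative; Server supports `async with` and serve_forever() as of Python 3.7):

import asyncio

async def handle_echo(reader, writer):
    data = await reader.read(100)
    writer.write(data)
    await writer.drain()
    writer.close()

async def main():
    server = await asyncio.start_server(handle_echo, '127.0.0.1', 8888)
    async with server:
        await server.serve_forever()

asyncio.run(main())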
if hasattr(socket, 'AF_UNIX'):
    # UNIX Domain Sockets are supported on this platform

    async def open_unix_connection(path=None, *,
                                   loop=None, limit=_DEFAULT_LIMIT, **kwds):
        """Similar to `open_connection` but works with UNIX Domain Sockets."""
        if loop is None:
            loop = events.get_event_loop()
        reader = StreamReader(limit=limit, loop=loop)
        protocol = StreamReaderProtocol(reader, loop=loop)
        transport, _ = await loop.create_unix_connection(
            lambda: protocol, path, **kwds)
        writer = StreamWriter(transport, protocol, reader, loop)
        return reader, writer

    async def start_unix_server(client_connected_cb, path=None, *,
                                loop=None, limit=_DEFAULT_LIMIT, **kwds):
        """Similar to `start_server` but works with UNIX Domain Sockets."""
        if loop is None:
            loop = events.get_event_loop()

        def factory():
            reader = StreamReader(limit=limit, loop=loop)
            protocol = StreamReaderProtocol(reader, client_connected_cb,
                                            loop=loop)
            return protocol

        return await loop.create_unix_server(factory, path, **kwds)

class FlowControlMixin(protocols.Protocol):
    """Reusable flow control logic for StreamWriter.drain().

    This implements the protocol methods pause_writing(),
    resume_writing() and connection_lost().  If the subclass overrides
    these it must call the super methods.

    StreamWriter.drain() must wait for _drain_helper() coroutine.
    """

    def __init__(self, loop=None):
        if loop is None:
            self._loop = events.get_event_loop()
        else:
            self._loop = loop
        self._paused = False
        self._drain_waiter = None
        self._connection_lost = False

    def pause_writing(self):
        assert not self._paused
        self._paused = True
        if self._loop.get_debug():
            logger.debug("%r pauses writing", self)

    def resume_writing(self):
        assert self._paused
        self._paused = False
        if self._loop.get_debug():
            logger.debug("%r resumes writing", self)

        waiter = self._drain_waiter
        if waiter is not None:
            self._drain_waiter = None
            if not waiter.done():
                waiter.set_result(None)

    def connection_lost(self, exc):
        self._connection_lost = True
        # Wake up the writer if currently paused.
        if not self._paused:
            return
        waiter = self._drain_waiter
        if waiter is None:
            return
        self._drain_waiter = None
        if waiter.done():
            return
        if exc is None:
            waiter.set_result(None)
        else:
            waiter.set_exception(exc)

    async def _drain_helper(self):
        if self._connection_lost:
            raise ConnectionResetError('Connection lost')
        if not self._paused:
            return
        waiter = self._drain_waiter
        assert waiter is None or waiter.cancelled()
        waiter = self._loop.create_future()
        self._drain_waiter = waiter
        await waiter

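The transport decides when pause_writing()/resume_writing() fire, based on its write-buffer water marks. A hedged sketch (the limits shown are illustrative; set_write_buffer_limits() is the standard WriteTransport API):

# After a connection is established, e.g. via open_connection():
transport = writer.transport
# pause_writing() fires once the buffered bytes exceed `high`;
# resume_writing() fires once they drain below `low`, which in turn
# releases any coroutine parked in _drain_helper().
transport.set_write_buffer_limits(high=64 * 1024, low=16 * 1024)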
class StreamReaderProtocol(FlowControlMixin, protocols.Protocol):
    """Helper class to adapt between Protocol and StreamReader.

    (This is a helper class instead of making StreamReader itself a
    Protocol subclass, because the StreamReader has other potential
    uses, and to prevent the user of the StreamReader from accidentally
    calling inappropriate methods of the protocol.)
    """

    def __init__(self, stream_reader, client_connected_cb=None, loop=None):
        super().__init__(loop=loop)
        self._stream_reader = stream_reader
        self._stream_writer = None
        self._client_connected_cb = client_connected_cb
        self._over_ssl = False
        self._closed = self._loop.create_future()

    def connection_made(self, transport):
        self._stream_reader.set_transport(transport)
        self._over_ssl = transport.get_extra_info('sslcontext') is not None
        if self._client_connected_cb is not None:
            self._stream_writer = StreamWriter(transport, self,
                                               self._stream_reader,
                                               self._loop)
            res = self._client_connected_cb(self._stream_reader,
                                            self._stream_writer)
            if coroutines.iscoroutine(res):
                self._loop.create_task(res)

    def connection_lost(self, exc):
        if self._stream_reader is not None:
            if exc is None:
                self._stream_reader.feed_eof()
            else:
                self._stream_reader.set_exception(exc)
        if not self._closed.done():
            if exc is None:
                self._closed.set_result(None)
            else:
                self._closed.set_exception(exc)
        super().connection_lost(exc)
        self._stream_reader = None
        self._stream_writer = None

    def data_received(self, data):
        self._stream_reader.feed_data(data)

    def eof_received(self):
        self._stream_reader.feed_eof()
        if self._over_ssl:
            # Prevent a warning in SSLProtocol.eof_received:
            # "returning true from eof_received()
            # has no effect when using ssl"
            return False
        return True

    def __del__(self):
        # Prevent reports about unhandled exceptions.
        # Better than self._closed._log_traceback = False hack
        closed = self._closed
        if closed.done() and not closed.cancelled():
            closed.exception()

class StreamWriter:
    """Wraps a Transport.

    This exposes write(), writelines(), [can_]write_eof(),
    get_extra_info() and close().  It adds drain() which returns an
    optional Future on which you can wait for flow control.  It also
    adds a transport property which references the Transport
    directly.
    """

    def __init__(self, transport, protocol, reader, loop):
        self._transport = transport
        self._protocol = protocol
        # drain() expects that the reader has an exception() method
        assert reader is None or isinstance(reader, StreamReader)
        self._reader = reader
        self._loop = loop

    def __repr__(self):
        info = [self.__class__.__name__, f'transport={self._transport!r}']
        if self._reader is not None:
            info.append(f'reader={self._reader!r}')
        return '<{}>'.format(' '.join(info))

    @property
    def transport(self):
        return self._transport

    def write(self, data):
        self._transport.write(data)

    def writelines(self, data):
        self._transport.writelines(data)

    def write_eof(self):
        return self._transport.write_eof()

    def can_write_eof(self):
        return self._transport.can_write_eof()

    def close(self):
        return self._transport.close()

    def is_closing(self):
        return self._transport.is_closing()

    async def wait_closed(self):
        await self._protocol._closed

    def get_extra_info(self, name, default=None):
        return self._transport.get_extra_info(name, default)

    async def drain(self):
        """Flush the write buffer.

        The intended use is to write

          w.write(data)
          await w.drain()
        """
        if self._reader is not None:
            exc = self._reader.exception()
            if exc is not None:
                raise exc
        if self._transport.is_closing():
            # Yield to the event loop so connection_lost() may be
            # called.  Without this, _drain_helper() would return
            # immediately, and code that calls
            #     write(...); await drain()
            # in a loop would never call connection_lost(), so it
            # would not see an error when the socket is closed.
            await sleep(0, loop=self._loop)
        await self._protocol._drain_helper()

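A common teardown pattern with the writer above (a sketch; `payload` is an illustrative placeholder):

writer.write(payload)
await writer.drain()        # waits in _drain_helper() while writing is paused
writer.close()              # closes the underlying transport
await writer.wait_closed()  # resolves once connection_lost() has run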
class StreamReader:

    def __init__(self, limit=_DEFAULT_LIMIT, loop=None):
        # The line length limit is a security feature;
        # it also doubles as half the buffer limit.

        if limit <= 0:
            raise ValueError('Limit cannot be <= 0')

        self._limit = limit
        if loop is None:
            self._loop = events.get_event_loop()
        else:
            self._loop = loop
        self._buffer = bytearray()
        self._eof = False    # Whether we're done.
        self._waiter = None  # A future used by _wait_for_data()
        self._exception = None
        self._transport = None
        self._paused = False

    def __repr__(self):
        info = ['StreamReader']
        if self._buffer:
            info.append(f'{len(self._buffer)} bytes')
        if self._eof:
            info.append('eof')
        if self._limit != _DEFAULT_LIMIT:
            info.append(f'limit={self._limit}')
        if self._waiter:
            info.append(f'waiter={self._waiter!r}')
        if self._exception:
            info.append(f'exception={self._exception!r}')
        if self._transport:
            info.append(f'transport={self._transport!r}')
        if self._paused:
            info.append('paused')
        return '<{}>'.format(' '.join(info))

    def exception(self):
        return self._exception

    def set_exception(self, exc):
        self._exception = exc

        waiter = self._waiter
        if waiter is not None:
            self._waiter = None
            if not waiter.cancelled():
                waiter.set_exception(exc)

    def _wakeup_waiter(self):
        """Wakeup read*() functions waiting for data or EOF."""
        waiter = self._waiter
        if waiter is not None:
            self._waiter = None
            if not waiter.cancelled():
                waiter.set_result(None)

    def set_transport(self, transport):
        assert self._transport is None, 'Transport already set'
        self._transport = transport

    def _maybe_resume_transport(self):
        if self._paused and len(self._buffer) <= self._limit:
            self._paused = False
            self._transport.resume_reading()

    def feed_eof(self):
        self._eof = True
        self._wakeup_waiter()

    def at_eof(self):
        """Return True if the buffer is empty and 'feed_eof' was called."""
        return self._eof and not self._buffer

    def feed_data(self, data):
        assert not self._eof, 'feed_data after feed_eof'

        if not data:
            return

        self._buffer.extend(data)
        self._wakeup_waiter()

        if (self._transport is not None and
                not self._paused and
                len(self._buffer) > 2 * self._limit):
            try:
                self._transport.pause_reading()
            except NotImplementedError:
                # The transport can't be paused.
                # We'll just have to buffer all data.
                # Forget the transport so we don't keep trying.
                self._transport = None
            else:
                self._paused = True

    async def _wait_for_data(self, func_name):
        """Wait until feed_data() or feed_eof() is called.

        If stream was paused, automatically resume it.
        """
        # StreamReader uses a future to link the protocol feed_data() method
        # to a read coroutine.  Running two read coroutines at the same time
        # would have unexpected behaviour: it would not be possible to know
        # which coroutine would get the next data.
        if self._waiter is not None:
            raise RuntimeError(
                f'{func_name}() called while another coroutine is '
                f'already waiting for incoming data')

        assert not self._eof, '_wait_for_data after EOF'

        # Waiting for data while paused would cause a deadlock, so prevent it.
        # This is essential for readexactly(n) when n > self._limit.
        if self._paused:
            self._paused = False
            self._transport.resume_reading()

        self._waiter = self._loop.create_future()
        try:
            await self._waiter
        finally:
            self._waiter = None

    async def readline(self):
        """Read chunk of data from the stream until newline (b'\n') is found.

        On success, return chunk that ends with newline. If only partial
        line can be read due to EOF, return incomplete line without
        terminating newline. When EOF was reached while no bytes were read,
        an empty bytes object is returned.

        If limit is reached, ValueError will be raised. In that case, if
        newline was found, complete line including newline will be removed
        from internal buffer. Else, internal buffer will be cleared. Limit is
        compared against part of the line without newline.

        If stream was paused, this function will automatically resume it if
        needed.
        """
        sep = b'\n'
        seplen = len(sep)
        try:
            line = await self.readuntil(sep)
        except IncompleteReadError as e:
            return e.partial
        except LimitOverrunError as e:
            if self._buffer.startswith(sep, e.consumed):
                del self._buffer[:e.consumed + seplen]
            else:
                self._buffer.clear()
            self._maybe_resume_transport()
            raise ValueError(e.args[0])
        return line

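    # Illustrative sketch (not part of the original module): consuming a
    # stream line by line; readline() returns b'' only at EOF.
    #
    #     while True:
    #         line = await reader.readline()
    #         if not line:
    #             break          # EOF
    #         handle(line)       # `handle` is a placeholder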
    async def readuntil(self, separator=b'\n'):
        """Read data from the stream until ``separator`` is found.

        On success, the data and separator will be removed from the
        internal buffer (consumed).  Returned data will include the
        separator at the end.

        Configured stream limit is used to check result.  Limit sets the
        maximal length of data that can be returned, not counting the
        separator.

        If an EOF occurs and the complete separator is still not found,
        an IncompleteReadError exception will be raised, and the internal
        buffer will be reset.  The IncompleteReadError.partial attribute
        may contain the separator partially.

        If the data cannot be read because the limit was exceeded, a
        LimitOverrunError exception will be raised, and the data
        will be left in the internal buffer, so it can be read again.
        """
        seplen = len(separator)
        if seplen == 0:
            raise ValueError('Separator should be at least one-byte string')

        if self._exception is not None:
            raise self._exception

        # Consume whole buffer except last bytes, which length is
        # one less than seplen. Let's check corner cases with
        # separator='SEPARATOR':
        # * we have received almost complete separator (without last
        #   byte). i.e buffer='some textSEPARATO'. In this case we
        #   can safely consume len(separator) - 1 bytes.
        # * last byte of buffer is first byte of separator, i.e.
        #   buffer='abcdefghijklmnopqrS'. We may safely consume
        #   everything except that last byte, but this requires
        #   analyzing the bytes of the buffer that match a partial
        #   separator.  This is slow and/or requires an FSM.  For this
        #   case our implementation is not optimal, since it requires
        #   rescanning data that is known not to belong to the separator.
        #   In the real world, the separator will not be long enough to
        #   cause performance problems.  Even when reading MIME-encoded
        #   messages :)

        # `offset` is the number of bytes from the beginning of the buffer
        # where there is no occurrence of `separator`.
        offset = 0

        # Loop until we find `separator` in the buffer, exceed the buffer size,
        # or an EOF has happened.
        while True:
            buflen = len(self._buffer)

            # Check if we now have enough data in the buffer for `separator` to
            # fit.
            if buflen - offset >= seplen:
                isep = self._buffer.find(separator, offset)

                if isep != -1:
                    # `separator` is in the buffer. `isep` will be used later
                    # to retrieve the data.
                    break

                # see upper comment for explanation.
                offset = buflen + 1 - seplen
                if offset > self._limit:
                    raise LimitOverrunError(
                        'Separator is not found, and chunk exceed the limit',
                        offset)

            # Complete message (with full separator) may be present in buffer
            # even when EOF flag is set. This may happen when the last chunk
            # adds data which makes separator be found. That's why we check for
            # EOF *after* inspecting the buffer.
            if self._eof:
                chunk = bytes(self._buffer)
                self._buffer.clear()
                raise IncompleteReadError(chunk, None)

            # _wait_for_data() will resume reading if stream was paused.
            await self._wait_for_data('readuntil')

        if isep > self._limit:
            raise LimitOverrunError(
                'Separator is found, but chunk is longer than limit', isep)

        chunk = self._buffer[:isep + seplen]
        del self._buffer[:isep + seplen]
        self._maybe_resume_transport()
        return bytes(chunk)

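    # Illustrative sketch (not part of the original module): reading
    # CRLF-delimited frames and handling both failure modes described above.
    #
    #     try:
    #         frame = await reader.readuntil(b'\r\n')
    #     except IncompleteReadError as e:
    #         frame = e.partial       # EOF arrived before the separator
    #     except LimitOverrunError:
    #         ...                     # data stays buffered; read it or abort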
    async def read(self, n=-1):
        """Read up to `n` bytes from the stream.

        If n is not provided, or set to -1, read until EOF and return all read
        bytes. If the EOF was received and the internal buffer is empty, return
        an empty bytes object.

        If n is zero, return an empty bytes object immediately.

        If n is positive, this function tries to read `n` bytes, and may
        return fewer bytes than requested, but at least one byte.  If EOF
        was received before any byte is read, this function returns an
        empty bytes object.

        The returned value is not limited by the limit configured at
        stream creation.

        If stream was paused, this function will automatically resume it if
        needed.
        """

        if self._exception is not None:
            raise self._exception

        if n == 0:
            return b''

        if n < 0:
            # This used to just loop creating a new waiter hoping to
            # collect everything in self._buffer, but that would
            # deadlock if the subprocess sends more than self.limit
            # bytes.  So just call self.read(self._limit) until EOF.
            blocks = []
            while True:
                block = await self.read(self._limit)
                if not block:
                    break
                blocks.append(block)
            return b''.join(blocks)

        if not self._buffer and not self._eof:
            await self._wait_for_data('read')

        # This will work right even if buffer is less than n bytes
        data = bytes(self._buffer[:n])
        del self._buffer[:n]

        self._maybe_resume_transport()
        return data

    async def readexactly(self, n):
        """Read exactly `n` bytes.

        Raise an IncompleteReadError if EOF is reached before `n` bytes can be
        read.  The IncompleteReadError.partial attribute of the exception will
        contain the partial read bytes.

        If n is zero, return an empty bytes object.

        The returned value is not limited by the limit configured at
        stream creation.

        If stream was paused, this function will automatically resume it if
        needed.
        """
        if n < 0:
            raise ValueError('readexactly size can not be less than zero')

        if self._exception is not None:
            raise self._exception

        if n == 0:
            return b''

        while len(self._buffer) < n:
            if self._eof:
                incomplete = bytes(self._buffer)
                self._buffer.clear()
                raise IncompleteReadError(incomplete, n)

            await self._wait_for_data('readexactly')

        if len(self._buffer) == n:
            data = bytes(self._buffer)
            self._buffer.clear()
        else:
            data = bytes(self._buffer[:n])
            del self._buffer[:n]
        self._maybe_resume_transport()
        return data

    def __aiter__(self):
        return self

    async def __anext__(self):
        val = await self.readline()
        if val == b'':
            raise StopAsyncIteration
        return val
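readexactly() makes fixed-size framing straightforward. A minimal sketch of a length-prefixed protocol (the 4-byte big-endian header is an illustrative convention, not part of asyncio):

import struct

async def read_frame(reader):
    header = await reader.readexactly(4)   # IncompleteReadError on short EOF
    (length,) = struct.unpack('!I', header)
    return await reader.readexactly(length)

Because StreamReader implements __aiter__/__anext__, `async for line in reader:` is equivalent to calling readline() until it returns b''.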
218 Lib/asyncio/subprocess.py Normal file
@@ -0,0 +1,218 @@
__all__ = 'create_subprocess_exec', 'create_subprocess_shell'

import subprocess

from . import events
from . import protocols
from . import streams
from . import tasks
from .log import logger


PIPE = subprocess.PIPE
STDOUT = subprocess.STDOUT
DEVNULL = subprocess.DEVNULL


class SubprocessStreamProtocol(streams.FlowControlMixin,
                               protocols.SubprocessProtocol):
    """Like StreamReaderProtocol, but for a subprocess."""

    def __init__(self, limit, loop):
        super().__init__(loop=loop)
        self._limit = limit
        self.stdin = self.stdout = self.stderr = None
        self._transport = None
        self._process_exited = False
        self._pipe_fds = []

    def __repr__(self):
        info = [self.__class__.__name__]
        if self.stdin is not None:
            info.append(f'stdin={self.stdin!r}')
        if self.stdout is not None:
            info.append(f'stdout={self.stdout!r}')
        if self.stderr is not None:
            info.append(f'stderr={self.stderr!r}')
        return '<{}>'.format(' '.join(info))

    def connection_made(self, transport):
        self._transport = transport

        stdout_transport = transport.get_pipe_transport(1)
        if stdout_transport is not None:
            self.stdout = streams.StreamReader(limit=self._limit,
                                               loop=self._loop)
            self.stdout.set_transport(stdout_transport)
            self._pipe_fds.append(1)

        stderr_transport = transport.get_pipe_transport(2)
        if stderr_transport is not None:
            self.stderr = streams.StreamReader(limit=self._limit,
                                               loop=self._loop)
            self.stderr.set_transport(stderr_transport)
            self._pipe_fds.append(2)

        stdin_transport = transport.get_pipe_transport(0)
        if stdin_transport is not None:
            self.stdin = streams.StreamWriter(stdin_transport,
                                              protocol=self,
                                              reader=None,
                                              loop=self._loop)

    def pipe_data_received(self, fd, data):
        if fd == 1:
            reader = self.stdout
        elif fd == 2:
            reader = self.stderr
        else:
            reader = None
        if reader is not None:
            reader.feed_data(data)

    def pipe_connection_lost(self, fd, exc):
        if fd == 0:
            pipe = self.stdin
            if pipe is not None:
                pipe.close()
            self.connection_lost(exc)
            return
        if fd == 1:
            reader = self.stdout
        elif fd == 2:
            reader = self.stderr
        else:
            reader = None
        if reader is not None:
            if exc is None:
                reader.feed_eof()
            else:
                reader.set_exception(exc)

        if fd in self._pipe_fds:
            self._pipe_fds.remove(fd)
        self._maybe_close_transport()

    def process_exited(self):
        self._process_exited = True
        self._maybe_close_transport()

    def _maybe_close_transport(self):
        if len(self._pipe_fds) == 0 and self._process_exited:
            self._transport.close()
            self._transport = None

class Process:
    def __init__(self, transport, protocol, loop):
        self._transport = transport
        self._protocol = protocol
        self._loop = loop
        self.stdin = protocol.stdin
        self.stdout = protocol.stdout
        self.stderr = protocol.stderr
        self.pid = transport.get_pid()

    def __repr__(self):
        return f'<{self.__class__.__name__} {self.pid}>'

    @property
    def returncode(self):
        return self._transport.get_returncode()

    async def wait(self):
        """Wait until the process exits and return its return code."""
        return await self._transport._wait()

    def send_signal(self, signal):
        self._transport.send_signal(signal)

    def terminate(self):
        self._transport.terminate()

    def kill(self):
        self._transport.kill()

    async def _feed_stdin(self, input):
        debug = self._loop.get_debug()
        self.stdin.write(input)
        if debug:
            logger.debug(
                '%r communicate: feed stdin (%s bytes)', self, len(input))
        try:
            await self.stdin.drain()
        except (BrokenPipeError, ConnectionResetError) as exc:
            # communicate() ignores BrokenPipeError and ConnectionResetError
            if debug:
                logger.debug('%r communicate: stdin got %r', self, exc)

        if debug:
            logger.debug('%r communicate: close stdin', self)
        self.stdin.close()

    async def _noop(self):
        return None

    async def _read_stream(self, fd):
        transport = self._transport.get_pipe_transport(fd)
        if fd == 2:
            stream = self.stderr
        else:
            assert fd == 1
            stream = self.stdout
        if self._loop.get_debug():
            name = 'stdout' if fd == 1 else 'stderr'
            logger.debug('%r communicate: read %s', self, name)
        output = await stream.read()
        if self._loop.get_debug():
            name = 'stdout' if fd == 1 else 'stderr'
            logger.debug('%r communicate: close %s', self, name)
        transport.close()
        return output

    async def communicate(self, input=None):
        if input is not None:
            stdin = self._feed_stdin(input)
        else:
            stdin = self._noop()
        if self.stdout is not None:
            stdout = self._read_stream(1)
        else:
            stdout = self._noop()
        if self.stderr is not None:
            stderr = self._read_stream(2)
        else:
            stderr = self._noop()
        stdin, stdout, stderr = await tasks.gather(stdin, stdout, stderr,
                                                   loop=self._loop)
        await self.wait()
        return (stdout, stderr)


async def create_subprocess_shell(cmd, stdin=None, stdout=None, stderr=None,
                                  loop=None, limit=streams._DEFAULT_LIMIT,
                                  **kwds):
    if loop is None:
        loop = events.get_event_loop()
    protocol_factory = lambda: SubprocessStreamProtocol(limit=limit,
                                                        loop=loop)
    transport, protocol = await loop.subprocess_shell(
        protocol_factory,
        cmd, stdin=stdin, stdout=stdout,
        stderr=stderr, **kwds)
    return Process(transport, protocol, loop)


async def create_subprocess_exec(program, *args, stdin=None, stdout=None,
                                 stderr=None, loop=None,
                                 limit=streams._DEFAULT_LIMIT, **kwds):
    if loop is None:
        loop = events.get_event_loop()
    protocol_factory = lambda: SubprocessStreamProtocol(limit=limit,
                                                        loop=loop)
    transport, protocol = await loop.subprocess_exec(
        protocol_factory,
        program, *args,
        stdin=stdin, stdout=stdout,
        stderr=stderr, **kwds)
    return Process(transport, protocol, loop)
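A minimal usage sketch for the factories above (the command is illustrative and assumes a POSIX `echo`; communicate() returns the captured stdout/stderr pair and also awaits process exit):

import asyncio

async def run_echo():
    proc = await asyncio.create_subprocess_exec(
        'echo', 'hello',
        stdout=asyncio.subprocess.PIPE)
    stdout, _ = await proc.communicate()
    return proc.returncode, stdout

print(asyncio.run(run_echo()))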
867 Lib/asyncio/tasks.py Normal file
@@ -0,0 +1,867 @@
"""Support for tasks, coroutines and the scheduler."""
|
||||||
|
|
||||||
|
__all__ = (
|
||||||
|
'Task', 'create_task',
|
||||||
|
'FIRST_COMPLETED', 'FIRST_EXCEPTION', 'ALL_COMPLETED',
|
||||||
|
'wait', 'wait_for', 'as_completed', 'sleep',
|
||||||
|
'gather', 'shield', 'ensure_future', 'run_coroutine_threadsafe',
|
||||||
|
'current_task', 'all_tasks',
|
||||||
|
'_register_task', '_unregister_task', '_enter_task', '_leave_task',
|
||||||
|
)
|
||||||
|
|
||||||
|
import concurrent.futures
|
||||||
|
import contextvars
|
||||||
|
import functools
|
||||||
|
import inspect
|
||||||
|
import types
|
||||||
|
import warnings
|
||||||
|
import weakref
|
||||||
|
|
||||||
|
from . import base_tasks
|
||||||
|
from . import coroutines
|
||||||
|
from . import events
|
||||||
|
from . import futures
|
||||||
|
from .coroutines import coroutine
|
||||||
|
|
||||||
|
|
||||||
|
def current_task(loop=None):
|
||||||
|
"""Return a currently executed task."""
|
||||||
|
if loop is None:
|
||||||
|
loop = events.get_running_loop()
|
||||||
|
return _current_tasks.get(loop)
|
||||||
|
|
||||||
|
|
||||||
|
def all_tasks(loop=None):
|
||||||
|
"""Return a set of all tasks for the loop."""
|
||||||
|
if loop is None:
|
||||||
|
loop = events.get_running_loop()
|
||||||
|
# NB: set(_all_tasks) is required to protect
|
||||||
|
# from https://bugs.python.org/issue34970 bug
|
||||||
|
return {t for t in list(_all_tasks)
|
||||||
|
if futures._get_loop(t) is loop and not t.done()}
|
||||||
|
|
||||||
|
|
||||||
|
def _all_tasks_compat(loop=None):
|
||||||
|
# Different from "all_task()" by returning *all* Tasks, including
|
||||||
|
# the completed ones. Used to implement deprecated "Tasks.all_task()"
|
||||||
|
# method.
|
||||||
|
if loop is None:
|
||||||
|
loop = events.get_event_loop()
|
||||||
|
# NB: set(_all_tasks) is required to protect
|
||||||
|
# from https://bugs.python.org/issue34970 bug
|
||||||
|
return {t for t in list(_all_tasks) if futures._get_loop(t) is loop}
|
||||||
|
|
||||||
|
|
||||||
|
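A quick sketch of the two module-level helpers above; both require a running event loop:

import asyncio

async def introspect():
    me = asyncio.current_task()    # the Task wrapping this coroutine
    running = asyncio.all_tasks()  # pending tasks on the current loop
    print(me in running, len(running))

asyncio.run(introspect())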
class Task(futures._PyFuture):  # Inherit Python Task implementation
                                # from a Python Future implementation.

    """A coroutine wrapped in a Future."""

    # An important invariant maintained while a Task is not done:
    #
    # - Either _fut_waiter is None, and _step() is scheduled;
    # - or _fut_waiter is some Future, and _step() is *not* scheduled.
    #
    # The only transition from the latter to the former is through
    # _wakeup().  When _fut_waiter is not None, one of its callbacks
    # must be _wakeup().

    # If False, don't log a message if the task is destroyed while its
    # status is still pending.
    _log_destroy_pending = True

    @classmethod
    def current_task(cls, loop=None):
        """Return the currently running task in an event loop or None.

        By default the current task for the current event loop is returned.

        None is returned when called not in the context of a Task.
        """
        warnings.warn("Task.current_task() is deprecated, "
                      "use asyncio.current_task() instead",
                      PendingDeprecationWarning,
                      stacklevel=2)
        if loop is None:
            loop = events.get_event_loop()
        return current_task(loop)

    @classmethod
    def all_tasks(cls, loop=None):
        """Return a set of all tasks for an event loop.

        By default all tasks for the current event loop are returned.
        """
        warnings.warn("Task.all_tasks() is deprecated, "
                      "use asyncio.all_tasks() instead",
                      PendingDeprecationWarning,
                      stacklevel=2)
        return _all_tasks_compat(loop)

    def __init__(self, coro, *, loop=None):
        super().__init__(loop=loop)
        if self._source_traceback:
            del self._source_traceback[-1]
        if not coroutines.iscoroutine(coro):
            # raise after Future.__init__(), attrs are required for __del__
            # prevent logging for pending task in __del__
            self._log_destroy_pending = False
            raise TypeError(f"a coroutine was expected, got {coro!r}")

        self._must_cancel = False
        self._fut_waiter = None
        self._coro = coro
        self._context = contextvars.copy_context()

        self._loop.call_soon(self.__step, context=self._context)
        _register_task(self)

    def __del__(self):
        if self._state == futures._PENDING and self._log_destroy_pending:
            context = {
                'task': self,
                'message': 'Task was destroyed but it is pending!',
            }
            if self._source_traceback:
                context['source_traceback'] = self._source_traceback
            self._loop.call_exception_handler(context)
        super().__del__()

    def _repr_info(self):
        return base_tasks._task_repr_info(self)

    def set_result(self, result):
        raise RuntimeError('Task does not support set_result operation')

    def set_exception(self, exception):
        raise RuntimeError('Task does not support set_exception operation')

    def get_stack(self, *, limit=None):
        """Return the list of stack frames for this task's coroutine.

        If the coroutine is not done, this returns the stack where it is
        suspended.  If the coroutine has completed successfully or was
        cancelled, this returns an empty list.  If the coroutine was
        terminated by an exception, this returns the list of traceback
        frames.

        The frames are always ordered from oldest to newest.

        The optional limit gives the maximum number of frames to
        return; by default all available frames are returned.  Its
        meaning differs depending on whether a stack or a traceback is
        returned: the newest frames of a stack are returned, but the
        oldest frames of a traceback are returned.  (This matches the
        behavior of the traceback module.)

        For reasons beyond our control, only one stack frame is
        returned for a suspended coroutine.
        """
        return base_tasks._task_get_stack(self, limit)

    def print_stack(self, *, limit=None, file=None):
        """Print the stack or traceback for this task's coroutine.

        This produces output similar to that of the traceback module,
        for the frames retrieved by get_stack().  The limit argument
        is passed to get_stack().  The file argument is an I/O stream
        to which the output is written; by default output is written
        to sys.stderr.
        """
        return base_tasks._task_print_stack(self, limit, file)

    def cancel(self):
        """Request that this task cancel itself.

        This arranges for a CancelledError to be thrown into the
        wrapped coroutine on the next cycle through the event loop.
        The coroutine then has a chance to clean up or even deny
        the request using try/except/finally.

        Unlike Future.cancel, this does not guarantee that the
        task will be cancelled: the exception might be caught and
        acted upon, delaying cancellation of the task or preventing
        cancellation completely.  The task may also return a value or
        raise a different exception.

        Immediately after this method is called, Task.cancelled() will
        not return True (unless the task was already cancelled).  A
        task will be marked as cancelled when the wrapped coroutine
        terminates with a CancelledError exception (even if cancel()
        was not called).
        """
        self._log_traceback = False
        if self.done():
            return False
        if self._fut_waiter is not None:
            if self._fut_waiter.cancel():
                # Leave self._fut_waiter; it may be a Task that
                # catches and ignores the cancellation so we may have
                # to cancel it again later.
                return True
        # It must be the case that self.__step is already scheduled.
        self._must_cancel = True
        return True

    def __step(self, exc=None):
        if self.done():
            raise futures.InvalidStateError(
                f'_step(): already done: {self!r}, {exc!r}')
        if self._must_cancel:
            if not isinstance(exc, futures.CancelledError):
                exc = futures.CancelledError()
            self._must_cancel = False
        coro = self._coro
        self._fut_waiter = None

        _enter_task(self._loop, self)
        # Call either coro.throw(exc) or coro.send(None).
        try:
            if exc is None:
                # We use the `send` method directly, because coroutines
                # don't have `__iter__` and `__next__` methods.
                result = coro.send(None)
            else:
                result = coro.throw(exc)
        except StopIteration as exc:
            if self._must_cancel:
                # Task is cancelled right before coro stops.
                self._must_cancel = False
                super().set_exception(futures.CancelledError())
            else:
                super().set_result(exc.value)
        except futures.CancelledError:
            super().cancel()  # I.e., Future.cancel(self).
        except Exception as exc:
            super().set_exception(exc)
        except BaseException as exc:
            super().set_exception(exc)
            raise
        else:
            blocking = getattr(result, '_asyncio_future_blocking', None)
            if blocking is not None:
                # Yielded Future must come from Future.__iter__().
                if futures._get_loop(result) is not self._loop:
                    new_exc = RuntimeError(
                        f'Task {self!r} got Future '
                        f'{result!r} attached to a different loop')
                    self._loop.call_soon(
                        self.__step, new_exc, context=self._context)
                elif blocking:
                    if result is self:
                        new_exc = RuntimeError(
                            f'Task cannot await on itself: {self!r}')
                        self._loop.call_soon(
                            self.__step, new_exc, context=self._context)
                    else:
                        result._asyncio_future_blocking = False
                        result.add_done_callback(
                            self.__wakeup, context=self._context)
                        self._fut_waiter = result
                        if self._must_cancel:
                            if self._fut_waiter.cancel():
                                self._must_cancel = False
                else:
                    new_exc = RuntimeError(
                        f'yield was used instead of yield from '
                        f'in task {self!r} with {result!r}')
                    self._loop.call_soon(
                        self.__step, new_exc, context=self._context)

            elif result is None:
                # Bare yield relinquishes control for one event loop iteration.
                self._loop.call_soon(self.__step, context=self._context)
            elif inspect.isgenerator(result):
                # Yielding a generator is just wrong.
                new_exc = RuntimeError(
                    f'yield was used instead of yield from for '
                    f'generator in task {self!r} with {result!r}')
                self._loop.call_soon(
                    self.__step, new_exc, context=self._context)
            else:
                # Yielding something else is an error.
                new_exc = RuntimeError(f'Task got bad yield: {result!r}')
                self._loop.call_soon(
                    self.__step, new_exc, context=self._context)
        finally:
            _leave_task(self._loop, self)
            self = None  # Needed to break cycles when an exception occurs.

    def __wakeup(self, future):
        try:
            future.result()
        except Exception as exc:
            # This may also be a cancellation.
            self.__step(exc)
        else:
            # Don't pass the value of `future.result()` explicitly,
            # as `Future.__iter__` and `Future.__await__` don't need it.
            # If we call `_step(value, None)` instead of `_step()`,
            # Python eval loop would use `.send(value)` method call,
            # instead of `__next__()`, which is slower for futures
            # that return non-generator iterators from their `__iter__`.
            self.__step()
        self = None  # Needed to break cycles when an exception occurs.


_PyTask = Task


try:
    import _asyncio
except ImportError:
    pass
else:
    # _CTask is needed for tests.
    Task = _CTask = _asyncio.Task

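A sketch of the cancellation contract documented in Task.cancel() above (the sleep durations are illustrative):

import asyncio

async def worker():
    try:
        await asyncio.sleep(3600)
    except asyncio.CancelledError:
        # clean up here, then re-raise so the task is marked cancelled
        raise

async def main():
    task = asyncio.ensure_future(worker())
    await asyncio.sleep(0)  # let the task start
    task.cancel()           # a request, not a guarantee
    try:
        await task
    except asyncio.CancelledError:
        print('worker cancelled')

asyncio.run(main())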
def create_task(coro):
    """Schedule the execution of a coroutine object in a new Task.

    Return a Task object.
    """
    loop = events.get_running_loop()
    return loop.create_task(coro)


# wait() and as_completed() similar to those in PEP 3148.

FIRST_COMPLETED = concurrent.futures.FIRST_COMPLETED
FIRST_EXCEPTION = concurrent.futures.FIRST_EXCEPTION
ALL_COMPLETED = concurrent.futures.ALL_COMPLETED

async def wait(fs, *, loop=None, timeout=None, return_when=ALL_COMPLETED):
    """Wait for the Futures and coroutines given by fs to complete.

    The sequence futures must not be empty.

    Coroutines will be wrapped in Tasks.

    Returns two sets of Future: (done, pending).

    Usage:

        done, pending = await asyncio.wait(fs)

    Note: This does not raise TimeoutError! Futures that aren't done
    when the timeout occurs are returned in the second set.
    """
    if futures.isfuture(fs) or coroutines.iscoroutine(fs):
        raise TypeError(f"expect a list of futures, not {type(fs).__name__}")
    if not fs:
        raise ValueError('Set of coroutines/Futures is empty.')
    if return_when not in (FIRST_COMPLETED, FIRST_EXCEPTION, ALL_COMPLETED):
        raise ValueError(f'Invalid return_when value: {return_when}')

    if loop is None:
        loop = events.get_event_loop()

    fs = {ensure_future(f, loop=loop) for f in set(fs)}

    return await _wait(fs, timeout, return_when, loop)

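A usage sketch for wait(): as the docstring notes, a timeout does not cancel anything, so pending tasks must be dealt with explicitly (the delays are illustrative):

import asyncio

async def main():
    tasks = {asyncio.create_task(asyncio.sleep(d)) for d in (0.1, 5, 10)}
    done, pending = await asyncio.wait(tasks, timeout=1,
                                       return_when=asyncio.ALL_COMPLETED)
    for t in pending:
        t.cancel()  # wait() left them running

asyncio.run(main())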
def _release_waiter(waiter, *args):
|
||||||
|
if not waiter.done():
|
||||||
|
waiter.set_result(None)
|
||||||
|
|
||||||
|
|
||||||
|
async def wait_for(fut, timeout, *, loop=None):
|
||||||
|
"""Wait for the single Future or coroutine to complete, with timeout.
|
||||||
|
|
||||||
|
Coroutine will be wrapped in Task.
|
||||||
|
|
||||||
|
Returns result of the Future or coroutine. When a timeout occurs,
|
||||||
|
it cancels the task and raises TimeoutError. To avoid the task
|
||||||
|
cancellation, wrap it in shield().
|
||||||
|
|
||||||
|
If the wait is cancelled, the task is also cancelled.
|
||||||
|
|
||||||
|
This function is a coroutine.
|
||||||
|
"""
|
||||||
|
if loop is None:
|
||||||
|
loop = events.get_event_loop()
|
||||||
|
|
||||||
|
if timeout is None:
|
||||||
|
return await fut
|
||||||
|
|
||||||
|
if timeout <= 0:
|
||||||
|
fut = ensure_future(fut, loop=loop)
|
||||||
|
|
||||||
|
if fut.done():
|
||||||
|
return fut.result()
|
||||||
|
|
||||||
|
fut.cancel()
|
||||||
|
raise futures.TimeoutError()
|
||||||
|
|
||||||
|
waiter = loop.create_future()
|
||||||
|
timeout_handle = loop.call_later(timeout, _release_waiter, waiter)
|
||||||
|
cb = functools.partial(_release_waiter, waiter)
|
||||||
|
|
||||||
|
fut = ensure_future(fut, loop=loop)
|
||||||
|
fut.add_done_callback(cb)
|
||||||
|
|
||||||
|
try:
|
||||||
|
# wait until the future completes or the timeout
|
||||||
|
try:
|
||||||
|
await waiter
|
||||||
|
except futures.CancelledError:
|
||||||
|
fut.remove_done_callback(cb)
|
||||||
|
fut.cancel()
|
||||||
|
raise
|
||||||
|
|
||||||
|
if fut.done():
|
||||||
|
return fut.result()
|
||||||
|
else:
|
||||||
|
fut.remove_done_callback(cb)
|
||||||
|
# We must ensure that the task is not running
|
||||||
|
# after wait_for() returns.
|
||||||
|
# See https://bugs.python.org/issue32751
|
||||||
|
await _cancel_and_wait(fut, loop=loop)
|
||||||
|
raise futures.TimeoutError()
|
||||||
|
finally:
|
||||||
|
timeout_handle.cancel()
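
A sketch of the timeout behaviour described above (values invented): unlike wait(), wait_for() cancels the inner task and raises TimeoutError:

    import asyncio

    async def main():
        try:
            await asyncio.wait_for(asyncio.sleep(10), timeout=0.1)
        except asyncio.TimeoutError:
            print('timed out; the inner task was cancelled')

    asyncio.run(main())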


async def _wait(fs, timeout, return_when, loop):
    """Internal helper for wait().

    The fs argument must be a collection of Futures.
    """
    assert fs, 'Set of Futures is empty.'
    waiter = loop.create_future()
    timeout_handle = None
    if timeout is not None:
        timeout_handle = loop.call_later(timeout, _release_waiter, waiter)
    counter = len(fs)

    def _on_completion(f):
        nonlocal counter
        counter -= 1
        if (counter <= 0 or
                return_when == FIRST_COMPLETED or
                return_when == FIRST_EXCEPTION and (not f.cancelled() and
                                                    f.exception() is not None)):
            if timeout_handle is not None:
                timeout_handle.cancel()
            if not waiter.done():
                waiter.set_result(None)

    for f in fs:
        f.add_done_callback(_on_completion)

    try:
        await waiter
    finally:
        if timeout_handle is not None:
            timeout_handle.cancel()

    done, pending = set(), set()
    for f in fs:
        f.remove_done_callback(_on_completion)
        if f.done():
            done.add(f)
        else:
            pending.add(f)
    return done, pending


async def _cancel_and_wait(fut, loop):
    """Cancel the *fut* future or task and wait until it completes."""

    waiter = loop.create_future()
    cb = functools.partial(_release_waiter, waiter)
    fut.add_done_callback(cb)

    try:
        fut.cancel()
        # We cannot wait on *fut* directly to make
        # sure _cancel_and_wait itself is reliably cancellable.
        await waiter
    finally:
        fut.remove_done_callback(cb)


# This is *not* a @coroutine!  It is just an iterator (yielding Futures).
def as_completed(fs, *, loop=None, timeout=None):
    """Return an iterator whose values are coroutines.

    When waiting for the yielded coroutines you'll get the results (or
    exceptions!) of the original Futures (or coroutines), in the order
    in which and as soon as they complete.

    This differs from PEP 3148; the proper way to use this is:

        for f in as_completed(fs):
            result = await f  # The 'await' may raise.
            # Use result.

    If a timeout is specified, the 'await' will raise
    TimeoutError when the timeout occurs before all Futures are done.

    Note: The futures 'f' are not necessarily members of fs.
    """
    if futures.isfuture(fs) or coroutines.iscoroutine(fs):
        raise TypeError(f"expect a list of futures, not {type(fs).__name__}")
    loop = loop if loop is not None else events.get_event_loop()
    todo = {ensure_future(f, loop=loop) for f in set(fs)}
    from .queues import Queue  # Import here to avoid circular import problem.
    done = Queue(loop=loop)
    timeout_handle = None

    def _on_timeout():
        for f in todo:
            f.remove_done_callback(_on_completion)
            done.put_nowait(None)  # Queue a dummy value for _wait_for_one().
        todo.clear()  # Can't do todo.remove(f) in the loop.

    def _on_completion(f):
        if not todo:
            return  # _on_timeout() was here first.
        todo.remove(f)
        done.put_nowait(f)
        if not todo and timeout_handle is not None:
            timeout_handle.cancel()

    async def _wait_for_one():
        f = await done.get()
        if f is None:
            # Dummy value from _on_timeout().
            raise futures.TimeoutError
        return f.result()  # May raise f.exception().

    for f in todo:
        f.add_done_callback(_on_completion)
    if todo and timeout is not None:
        timeout_handle = loop.call_later(timeout, _on_timeout)
    for _ in range(len(todo)):
        yield _wait_for_one()
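
An illustrative loop over as_completed() (delays invented); results arrive in completion order, not submission order:

    import asyncio

    async def main():
        coros = [asyncio.sleep(d, result=d) for d in (0.3, 0.1, 0.2)]
        for f in asyncio.as_completed(coros):
            print(await f)  # -> 0.1, then 0.2, then 0.3

    asyncio.run(main())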


@types.coroutine
def __sleep0():
    """Skip one event loop run cycle.

    This is a private helper for 'asyncio.sleep()', used
    when the 'delay' is set to 0.  It uses a bare 'yield'
    expression (which Task.__step knows how to handle)
    instead of creating a Future object.
    """
    yield


async def sleep(delay, result=None, *, loop=None):
    """Coroutine that completes after a given time (in seconds)."""
    if delay <= 0:
        await __sleep0()
        return result

    if loop is None:
        loop = events.get_event_loop()
    future = loop.create_future()
    h = loop.call_later(delay,
                        futures._set_result_unless_cancelled,
                        future, result)
    try:
        return await future
    finally:
        h.cancel()
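
A quick sketch of the result passthrough (the value is invented):

    import asyncio

    async def main():
        print(await asyncio.sleep(0.1, result='woke up'))  # -> woke up

    asyncio.run(main())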


def ensure_future(coro_or_future, *, loop=None):
    """Wrap a coroutine or an awaitable in a future.

    If the argument is a Future, it is returned directly.
    """
    if coroutines.iscoroutine(coro_or_future):
        if loop is None:
            loop = events.get_event_loop()
        task = loop.create_task(coro_or_future)
        if task._source_traceback:
            del task._source_traceback[-1]
        return task
    elif futures.isfuture(coro_or_future):
        if loop is not None and loop is not futures._get_loop(coro_or_future):
            raise ValueError('loop argument must agree with Future')
        return coro_or_future
    elif inspect.isawaitable(coro_or_future):
        return ensure_future(_wrap_awaitable(coro_or_future), loop=loop)
    else:
        raise TypeError('An asyncio.Future, a coroutine or an awaitable is '
                        'required')
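
A sketch of the branches above (objects invented): a coroutine is wrapped in a Task, while a Future passes through unchanged:

    import asyncio

    async def coro():
        return 42

    async def main():
        t = asyncio.ensure_future(coro())      # wrapped in a Task
        assert isinstance(t, asyncio.Task)
        assert asyncio.ensure_future(t) is t   # Futures are returned as-is
        print(await t)  # -> 42

    asyncio.run(main())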


@coroutine
def _wrap_awaitable(awaitable):
    """Helper for asyncio.ensure_future().

    Wraps awaitable (an object with __await__) into a coroutine
    that will later be wrapped in a Task by ensure_future().
    """
    return (yield from awaitable.__await__())


class _GatheringFuture(futures.Future):
    """Helper for gather().

    This overrides cancel() to cancel all the children and act more
    like Task.cancel(), which doesn't immediately mark itself as
    cancelled.
    """

    def __init__(self, children, *, loop=None):
        super().__init__(loop=loop)
        self._children = children
        self._cancel_requested = False

    def cancel(self):
        if self.done():
            return False
        ret = False
        for child in self._children:
            if child.cancel():
                ret = True
        if ret:
            # If any child tasks were actually cancelled, we should
            # propagate the cancellation request regardless of
            # *return_exceptions* argument.  See issue 32684.
            self._cancel_requested = True
        return ret


def gather(*coros_or_futures, loop=None, return_exceptions=False):
    """Return a future aggregating results from the given coroutines/futures.

    Coroutines will be wrapped in a future and scheduled in the event
    loop.  They will not necessarily be scheduled in the same order as
    passed in.

    All futures must share the same event loop.  If all the tasks are
    done successfully, the returned future's result is the list of
    results (in the order of the original sequence, not necessarily
    the order of results arrival).  If *return_exceptions* is True,
    exceptions in the tasks are treated the same as successful
    results, and gathered in the result list; otherwise, the first
    raised exception will be immediately propagated to the returned
    future.

    Cancellation: if the outer Future is cancelled, all children (that
    have not completed yet) are also cancelled.  If any child is
    cancelled, this is treated as if it raised CancelledError --
    the outer Future is *not* cancelled in this case.  (This is to
    prevent the cancellation of one child from causing other children
    to be cancelled.)
    """
    if not coros_or_futures:
        if loop is None:
            loop = events.get_event_loop()
        outer = loop.create_future()
        outer.set_result([])
        return outer

    def _done_callback(fut):
        nonlocal nfinished
        nfinished += 1

        if outer.done():
            if not fut.cancelled():
                # Mark exception retrieved.
                fut.exception()
            return

        if not return_exceptions:
            if fut.cancelled():
                # Check if 'fut' is cancelled first, as
                # 'fut.exception()' will *raise* a CancelledError
                # instead of returning it.
                exc = futures.CancelledError()
                outer.set_exception(exc)
                return
            else:
                exc = fut.exception()
                if exc is not None:
                    outer.set_exception(exc)
                    return

        if nfinished == nfuts:
            # All futures are done; create a list of results
            # and set it to the 'outer' future.
            results = []

            for fut in children:
                if fut.cancelled():
                    # Check if 'fut' is cancelled first, as
                    # 'fut.exception()' will *raise* a CancelledError
                    # instead of returning it.
                    res = futures.CancelledError()
                else:
                    res = fut.exception()
                    if res is None:
                        res = fut.result()
                results.append(res)

            if outer._cancel_requested:
                # If gather is being cancelled we must propagate the
                # cancellation regardless of *return_exceptions* argument.
                # See issue 32684.
                outer.set_exception(futures.CancelledError())
            else:
                outer.set_result(results)

    arg_to_fut = {}
    children = []
    nfuts = 0
    nfinished = 0
    for arg in coros_or_futures:
        if arg not in arg_to_fut:
            fut = ensure_future(arg, loop=loop)
            if loop is None:
                loop = futures._get_loop(fut)
            if fut is not arg:
                # 'arg' was not a Future, therefore, 'fut' is a new
                # Future created specifically for 'arg'.  Since the caller
                # can't control it, disable the "destroy pending task"
                # warning.
                fut._log_destroy_pending = False

            nfuts += 1
            arg_to_fut[arg] = fut
            fut.add_done_callback(_done_callback)

        else:
            # There's a duplicate Future object in coros_or_futures.
            fut = arg_to_fut[arg]

        children.append(fut)

    outer = _GatheringFuture(children, loop=loop)
    return outer
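
A sketch of result ordering and return_exceptions (the coroutines are invented):

    import asyncio

    async def boom():
        raise ValueError('boom')

    async def main():
        # Results keep the argument order, and with return_exceptions=True
        # the exception is placed in the list instead of propagating.
        results = await asyncio.gather(asyncio.sleep(0.1, result=1),
                                       boom(),
                                       return_exceptions=True)
        print(results)  # -> [1, ValueError('boom')]

    asyncio.run(main())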


def shield(arg, *, loop=None):
    """Wait for a future, shielding it from cancellation.

    The statement

        res = await shield(something())

    is exactly equivalent to the statement

        res = await something()

    *except* that if the coroutine containing it is cancelled, the
    task running in something() is not cancelled.  From the POV of
    something(), the cancellation did not happen.  But its caller is
    still cancelled, so the await expression still raises
    CancelledError.  Note: If something() is cancelled by other means
    this will still cancel shield().

    If you want to completely ignore cancellation (not recommended)
    you can combine shield() with a try/except clause, as follows:

        try:
            res = await shield(something())
        except CancelledError:
            res = None
    """
    inner = ensure_future(arg, loop=loop)
    if inner.done():
        # Shortcut.
        return inner
    loop = futures._get_loop(inner)
    outer = loop.create_future()

    def _done_callback(inner):
        if outer.cancelled():
            if not inner.cancelled():
                # Mark inner's result as retrieved.
                inner.exception()
            return

        if inner.cancelled():
            outer.cancel()
        else:
            exc = inner.exception()
            if exc is not None:
                outer.set_exception(exc)
            else:
                outer.set_result(inner.result())

    inner.add_done_callback(_done_callback)
    return outer
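
The try/except pattern from the docstring, as a runnable sketch (something() is invented; no cancellation actually happens here, so res is simply the value):

    import asyncio

    async def something():
        await asyncio.sleep(0.1)
        return 'value'

    async def main():
        try:
            res = await asyncio.shield(something())
        except asyncio.CancelledError:
            res = None
        print(res)  # -> value

    asyncio.run(main())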


def run_coroutine_threadsafe(coro, loop):
    """Submit a coroutine object to a given event loop.

    Return a concurrent.futures.Future to access the result.
    """
    if not coroutines.iscoroutine(coro):
        raise TypeError('A coroutine object is required')
    future = concurrent.futures.Future()

    def callback():
        try:
            futures._chain_future(ensure_future(coro, loop=loop), future)
        except Exception as exc:
            if future.set_running_or_notify_cancel():
                future.set_exception(exc)
            raise

    loop.call_soon_threadsafe(callback)
    return future
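
A sketch of submitting work from another thread (the thread and coroutine are invented); the returned concurrent.futures.Future can be waited on synchronously:

    import asyncio
    import threading

    async def add(a, b):
        return a + b

    def worker(loop):
        fut = asyncio.run_coroutine_threadsafe(add(1, 2), loop)
        print(fut.result(timeout=5))  # -> 3, blocks this thread only

    async def main():
        loop = asyncio.get_event_loop()
        t = threading.Thread(target=worker, args=(loop,))
        t.start()
        await asyncio.sleep(0.5)  # keep the loop running for the worker
        t.join()

    asyncio.run(main())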


# WeakSet containing all alive tasks.
_all_tasks = weakref.WeakSet()

# Dictionary containing tasks that are currently active in
# all running event loops.  {EventLoop: Task}
_current_tasks = {}


def _register_task(task):
    """Register a new task in asyncio as executed by loop."""
    _all_tasks.add(task)


def _enter_task(loop, task):
    current_task = _current_tasks.get(loop)
    if current_task is not None:
        raise RuntimeError(f"Cannot enter into task {task!r} while another "
                           f"task {current_task!r} is being executed.")
    _current_tasks[loop] = task


def _leave_task(loop, task):
    current_task = _current_tasks.get(loop)
    if current_task is not task:
        raise RuntimeError(f"Leaving task {task!r} does not match "
                           f"the current task {current_task!r}.")
    del _current_tasks[loop]


def _unregister_task(task):
    """Unregister a task."""
    _all_tasks.discard(task)


_py_register_task = _register_task
_py_unregister_task = _unregister_task
_py_enter_task = _enter_task
_py_leave_task = _leave_task


try:
    from _asyncio import (_register_task, _unregister_task,
                          _enter_task, _leave_task,
                          _all_tasks, _current_tasks)
except ImportError:
    pass
else:
    _c_register_task = _register_task
    _c_unregister_task = _unregister_task
    _c_enter_task = _enter_task
    _c_leave_task = _leave_task

311  Lib/asyncio/transports.py  (new file)
@@ -0,0 +1,311 @@
"""Abstract Transport class."""
|
||||||
|
|
||||||
|
__all__ = (
|
||||||
|
'BaseTransport', 'ReadTransport', 'WriteTransport',
|
||||||
|
'Transport', 'DatagramTransport', 'SubprocessTransport',
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
|
class BaseTransport:
|
||||||
|
"""Base class for transports."""
|
||||||
|
|
||||||
|
def __init__(self, extra=None):
|
||||||
|
if extra is None:
|
||||||
|
extra = {}
|
||||||
|
self._extra = extra
|
||||||
|
|
||||||
|
def get_extra_info(self, name, default=None):
|
||||||
|
"""Get optional transport information."""
|
||||||
|
return self._extra.get(name, default)
|
||||||
|
|
||||||
|
def is_closing(self):
|
||||||
|
"""Return True if the transport is closing or closed."""
|
||||||
|
raise NotImplementedError
|
||||||
|
|
||||||
|
def close(self):
|
||||||
|
"""Close the transport.
|
||||||
|
|
||||||
|
Buffered data will be flushed asynchronously. No more data
|
||||||
|
will be received. After all buffered data is flushed, the
|
||||||
|
protocol's connection_lost() method will (eventually) called
|
||||||
|
with None as its argument.
|
||||||
|
"""
|
||||||
|
raise NotImplementedError
|
||||||
|
|
||||||
|
def set_protocol(self, protocol):
|
||||||
|
"""Set a new protocol."""
|
||||||
|
raise NotImplementedError
|
||||||
|
|
||||||
|
def get_protocol(self):
|
||||||
|
"""Return the current protocol."""
|
||||||
|
raise NotImplementedError
|
||||||
|
|
||||||
|
|
||||||
|
class ReadTransport(BaseTransport):
|
||||||
|
"""Interface for read-only transports."""
|
||||||
|
|
||||||
|
def is_reading(self):
|
||||||
|
"""Return True if the transport is receiving."""
|
||||||
|
raise NotImplementedError
|
||||||
|
|
||||||
|
def pause_reading(self):
|
||||||
|
"""Pause the receiving end.
|
||||||
|
|
||||||
|
No data will be passed to the protocol's data_received()
|
||||||
|
method until resume_reading() is called.
|
||||||
|
"""
|
||||||
|
raise NotImplementedError
|
||||||
|
|
||||||
|
def resume_reading(self):
|
||||||
|
"""Resume the receiving end.
|
||||||
|
|
||||||
|
Data received will once again be passed to the protocol's
|
||||||
|
data_received() method.
|
||||||
|
"""
|
||||||
|
raise NotImplementedError
|
||||||
|
|
||||||
|
|
||||||
|

class WriteTransport(BaseTransport):
    """Interface for write-only transports."""

    def set_write_buffer_limits(self, high=None, low=None):
        """Set the high- and low-water limits for write flow control.

        These two values control when to call the protocol's
        pause_writing() and resume_writing() methods.  If specified,
        the low-water limit must be less than or equal to the
        high-water limit.  Neither value can be negative.

        The defaults are implementation-specific.  If only the
        high-water limit is given, the low-water limit defaults to an
        implementation-specific value less than or equal to the
        high-water limit.  Setting high to zero forces low to zero as
        well, and causes pause_writing() to be called whenever the
        buffer becomes non-empty.  Setting low to zero causes
        resume_writing() to be called only once the buffer is empty.
        Use of zero for either limit is generally sub-optimal as it
        reduces opportunities for doing I/O and computation
        concurrently.
        """
        raise NotImplementedError

    def get_write_buffer_size(self):
        """Return the current size of the write buffer."""
        raise NotImplementedError

    def write(self, data):
        """Write some data bytes to the transport.

        This does not block; it buffers the data and arranges for it
        to be sent out asynchronously.
        """
        raise NotImplementedError

    def writelines(self, list_of_data):
        """Write a list (or any iterable) of data bytes to the transport.

        The default implementation concatenates the arguments and
        calls write() on the result.
        """
        data = b''.join(list_of_data)
        self.write(data)

    def write_eof(self):
        """Close the write end after flushing buffered data.

        (This is like typing ^D into a UNIX program reading from stdin.)

        Data may still be received.
        """
        raise NotImplementedError

    def can_write_eof(self):
        """Return True if this transport supports write_eof(), False if not."""
        raise NotImplementedError

    def abort(self):
        """Close the transport immediately.

        Buffered data will be lost.  No more data will be received.
        The protocol's connection_lost() method will (eventually) be
        called with None as its argument.
        """
        raise NotImplementedError


class Transport(ReadTransport, WriteTransport):
    """Interface representing a bidirectional transport.

    There may be several implementations, but typically, the user does
    not implement new transports; rather, the platform provides some
    useful transports that are implemented using the platform's best
    practices.

    The user never instantiates a transport directly; they call a
    utility function, passing it a protocol factory and other
    information necessary to create the transport and protocol.  (E.g.
    EventLoop.create_connection() or EventLoop.create_server().)

    The utility function will asynchronously create a transport and a
    protocol and hook them up by calling the protocol's
    connection_made() method, passing it the transport.

    The implementation here raises NotImplementedError for every method
    except writelines(), which calls write() in a loop.
    """


class DatagramTransport(BaseTransport):
    """Interface for datagram (UDP) transports."""

    def sendto(self, data, addr=None):
        """Send data to the transport.

        This does not block; it buffers the data and arranges for it
        to be sent out asynchronously.
        addr is the target socket address.
        If addr is None, the target address given at transport creation
        is used.
        """
        raise NotImplementedError

    def abort(self):
        """Close the transport immediately.

        Buffered data will be lost.  No more data will be received.
        The protocol's connection_lost() method will (eventually) be
        called with None as its argument.
        """
        raise NotImplementedError


class SubprocessTransport(BaseTransport):

    def get_pid(self):
        """Get subprocess id."""
        raise NotImplementedError

    def get_returncode(self):
        """Get subprocess returncode.

        See also
        http://docs.python.org/3/library/subprocess#subprocess.Popen.returncode
        """
        raise NotImplementedError

    def get_pipe_transport(self, fd):
        """Get transport for pipe with number fd."""
        raise NotImplementedError

    def send_signal(self, signal):
        """Send signal to subprocess.

        See also:
        docs.python.org/3/library/subprocess#subprocess.Popen.send_signal
        """
        raise NotImplementedError

    def terminate(self):
        """Stop the subprocess.

        Alias for close() method.

        On Posix OSs the method sends SIGTERM to the subprocess.
        On Windows the Win32 API function TerminateProcess()
        is called to stop the subprocess.

        See also:
        http://docs.python.org/3/library/subprocess#subprocess.Popen.terminate
        """
        raise NotImplementedError

    def kill(self):
        """Kill the subprocess.

        On Posix OSs the function sends SIGKILL to the subprocess.
        On Windows kill() is an alias for terminate().

        See also:
        http://docs.python.org/3/library/subprocess#subprocess.Popen.kill
        """
        raise NotImplementedError


class _FlowControlMixin(Transport):
    """All the logic for (write) flow control in a mix-in base class.

    The subclass must implement get_write_buffer_size().  It must call
    _maybe_pause_protocol() whenever the write buffer size increases,
    and _maybe_resume_protocol() whenever it decreases.  It may also
    override set_write_buffer_limits() (e.g. to specify different
    defaults).

    The subclass constructor must call super().__init__(extra).  This
    will call set_write_buffer_limits().

    The user may call set_write_buffer_limits() and
    get_write_buffer_size(), and their protocol's pause_writing() and
    resume_writing() may be called.
    """

    def __init__(self, extra=None, loop=None):
        super().__init__(extra)
        assert loop is not None
        self._loop = loop
        self._protocol_paused = False
        self._set_write_buffer_limits()

    def _maybe_pause_protocol(self):
        size = self.get_write_buffer_size()
        if size <= self._high_water:
            return
        if not self._protocol_paused:
            self._protocol_paused = True
            try:
                self._protocol.pause_writing()
            except Exception as exc:
                self._loop.call_exception_handler({
                    'message': 'protocol.pause_writing() failed',
                    'exception': exc,
                    'transport': self,
                    'protocol': self._protocol,
                })

    def _maybe_resume_protocol(self):
        if (self._protocol_paused and
                self.get_write_buffer_size() <= self._low_water):
            self._protocol_paused = False
            try:
                self._protocol.resume_writing()
            except Exception as exc:
                self._loop.call_exception_handler({
                    'message': 'protocol.resume_writing() failed',
                    'exception': exc,
                    'transport': self,
                    'protocol': self._protocol,
                })

    def get_write_buffer_limits(self):
        return (self._low_water, self._high_water)

    def _set_write_buffer_limits(self, high=None, low=None):
        if high is None:
            if low is None:
                high = 64 * 1024
            else:
                high = 4 * low
        if low is None:
            low = high // 4

        if not high >= low >= 0:
            raise ValueError(
                f'high ({high!r}) must be >= low ({low!r}) must be >= 0')

        self._high_water = high
        self._low_water = low

    def set_write_buffer_limits(self, high=None, low=None):
        self._set_write_buffer_limits(high=high, low=low)
        self._maybe_pause_protocol()

    def get_write_buffer_size(self):
        raise NotImplementedError
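
For illustration, a minimal subclass honouring the contract described in the mix-in's docstring (everything here is invented for the sketch; real transports track the bytes actually queued on the socket):

    class _SketchTransport(_FlowControlMixin):
        # Toy write transport: counts buffered bytes and applies the
        # pause/resume contract of _FlowControlMixin.

        def __init__(self, protocol, loop):
            super().__init__(None, loop)   # calls set_write_buffer_limits()
            self._protocol = protocol
            self._buffer = bytearray()

        def get_write_buffer_size(self):
            return len(self._buffer)

        def write(self, data):
            self._buffer.extend(data)
            self._maybe_pause_protocol()   # buffer grew

        def _on_flushed(self):             # hypothetical I/O completion hook
            self._buffer.clear()
            self._maybe_resume_protocol()  # buffer shrank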

1141  Lib/asyncio/unix_events.py  (new file)
File diff suppressed because it is too large

813  Lib/asyncio/windows_events.py  (new file)
@@ -0,0 +1,813 @@
"""Selector and proactor event loops for Windows."""
|
||||||
|
|
||||||
|
import _overlapped
|
||||||
|
import _winapi
|
||||||
|
import errno
|
||||||
|
import math
|
||||||
|
import msvcrt
|
||||||
|
import socket
|
||||||
|
import struct
|
||||||
|
import weakref
|
||||||
|
|
||||||
|
from . import events
|
||||||
|
from . import base_subprocess
|
||||||
|
from . import futures
|
||||||
|
from . import proactor_events
|
||||||
|
from . import selector_events
|
||||||
|
from . import tasks
|
||||||
|
from . import windows_utils
|
||||||
|
from .log import logger
|
||||||
|
|
||||||
|
|
||||||
|
__all__ = (
|
||||||
|
'SelectorEventLoop', 'ProactorEventLoop', 'IocpProactor',
|
||||||
|
'DefaultEventLoopPolicy', 'WindowsSelectorEventLoopPolicy',
|
||||||
|
'WindowsProactorEventLoopPolicy',
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
|
NULL = 0
|
||||||
|
INFINITE = 0xffffffff
|
||||||
|
ERROR_CONNECTION_REFUSED = 1225
|
||||||
|
ERROR_CONNECTION_ABORTED = 1236
|
||||||
|
|
||||||
|
# Initial delay in seconds for connect_pipe() before retrying to connect
|
||||||
|
CONNECT_PIPE_INIT_DELAY = 0.001
|
||||||
|
|
||||||
|
# Maximum delay in seconds for connect_pipe() before retrying to connect
|
||||||
|
CONNECT_PIPE_MAX_DELAY = 0.100
|
||||||
|
|
||||||
|
|
||||||
|
class _OverlappedFuture(futures.Future):
|
||||||
|
"""Subclass of Future which represents an overlapped operation.
|
||||||
|
|
||||||
|
Cancelling it will immediately cancel the overlapped operation.
|
||||||
|
"""
|
||||||
|
|
||||||
|
def __init__(self, ov, *, loop=None):
|
||||||
|
super().__init__(loop=loop)
|
||||||
|
if self._source_traceback:
|
||||||
|
del self._source_traceback[-1]
|
||||||
|
self._ov = ov
|
||||||
|
|
||||||
|
def _repr_info(self):
|
||||||
|
info = super()._repr_info()
|
||||||
|
if self._ov is not None:
|
||||||
|
state = 'pending' if self._ov.pending else 'completed'
|
||||||
|
info.insert(1, f'overlapped=<{state}, {self._ov.address:#x}>')
|
||||||
|
return info
|
||||||
|
|
||||||
|
def _cancel_overlapped(self):
|
||||||
|
if self._ov is None:
|
||||||
|
return
|
||||||
|
try:
|
||||||
|
self._ov.cancel()
|
||||||
|
except OSError as exc:
|
||||||
|
context = {
|
||||||
|
'message': 'Cancelling an overlapped future failed',
|
||||||
|
'exception': exc,
|
||||||
|
'future': self,
|
||||||
|
}
|
||||||
|
if self._source_traceback:
|
||||||
|
context['source_traceback'] = self._source_traceback
|
||||||
|
self._loop.call_exception_handler(context)
|
||||||
|
self._ov = None
|
||||||
|
|
||||||
|
def cancel(self):
|
||||||
|
self._cancel_overlapped()
|
||||||
|
return super().cancel()
|
||||||
|
|
||||||
|
def set_exception(self, exception):
|
||||||
|
super().set_exception(exception)
|
||||||
|
self._cancel_overlapped()
|
||||||
|
|
||||||
|
def set_result(self, result):
|
||||||
|
super().set_result(result)
|
||||||
|
self._ov = None
|
||||||
|
|
||||||
|
|
||||||
|

class _BaseWaitHandleFuture(futures.Future):
    """Subclass of Future which represents a wait handle."""

    def __init__(self, ov, handle, wait_handle, *, loop=None):
        super().__init__(loop=loop)
        if self._source_traceback:
            del self._source_traceback[-1]
        # Keep a reference to the Overlapped object to keep it alive until the
        # wait is unregistered
        self._ov = ov
        self._handle = handle
        self._wait_handle = wait_handle

        # Should we call UnregisterWaitEx() if the wait completes
        # or is cancelled?
        self._registered = True

    def _poll(self):
        # non-blocking wait: use a timeout of 0 millisecond
        return (_winapi.WaitForSingleObject(self._handle, 0) ==
                _winapi.WAIT_OBJECT_0)

    def _repr_info(self):
        info = super()._repr_info()
        info.append(f'handle={self._handle:#x}')
        if self._handle is not None:
            state = 'signaled' if self._poll() else 'waiting'
            info.append(state)
        if self._wait_handle is not None:
            info.append(f'wait_handle={self._wait_handle:#x}')
        return info

    def _unregister_wait_cb(self, fut):
        # The wait was unregistered: it's not safe to destroy the Overlapped
        # object
        self._ov = None

    def _unregister_wait(self):
        if not self._registered:
            return
        self._registered = False

        wait_handle = self._wait_handle
        self._wait_handle = None
        try:
            _overlapped.UnregisterWait(wait_handle)
        except OSError as exc:
            if exc.winerror != _overlapped.ERROR_IO_PENDING:
                context = {
                    'message': 'Failed to unregister the wait handle',
                    'exception': exc,
                    'future': self,
                }
                if self._source_traceback:
                    context['source_traceback'] = self._source_traceback
                self._loop.call_exception_handler(context)
                return
            # ERROR_IO_PENDING means that the unregister is pending

        self._unregister_wait_cb(None)

    def cancel(self):
        self._unregister_wait()
        return super().cancel()

    def set_exception(self, exception):
        self._unregister_wait()
        super().set_exception(exception)

    def set_result(self, result):
        self._unregister_wait()
        super().set_result(result)


class _WaitCancelFuture(_BaseWaitHandleFuture):
    """Subclass of Future which represents a wait for the cancellation of a
    _WaitHandleFuture using an event.
    """

    def __init__(self, ov, event, wait_handle, *, loop=None):
        super().__init__(ov, event, wait_handle, loop=loop)

        self._done_callback = None

    def cancel(self):
        raise RuntimeError("_WaitCancelFuture must not be cancelled")

    def set_result(self, result):
        super().set_result(result)
        if self._done_callback is not None:
            self._done_callback(self)

    def set_exception(self, exception):
        super().set_exception(exception)
        if self._done_callback is not None:
            self._done_callback(self)


class _WaitHandleFuture(_BaseWaitHandleFuture):
    def __init__(self, ov, handle, wait_handle, proactor, *, loop=None):
        super().__init__(ov, handle, wait_handle, loop=loop)
        self._proactor = proactor
        self._unregister_proactor = True
        self._event = _overlapped.CreateEvent(None, True, False, None)
        self._event_fut = None

    def _unregister_wait_cb(self, fut):
        if self._event is not None:
            _winapi.CloseHandle(self._event)
            self._event = None
            self._event_fut = None

        # If the wait was cancelled, the wait may never be signalled, so
        # it's required to unregister it. Otherwise, IocpProactor.close() will
        # wait forever for an event which will never come.
        #
        # If the IocpProactor already received the event, it's safe to call
        # _unregister() because we kept a reference to the Overlapped object
        # which is used as a unique key.
        self._proactor._unregister(self._ov)
        self._proactor = None

        super()._unregister_wait_cb(fut)

    def _unregister_wait(self):
        if not self._registered:
            return
        self._registered = False

        wait_handle = self._wait_handle
        self._wait_handle = None
        try:
            _overlapped.UnregisterWaitEx(wait_handle, self._event)
        except OSError as exc:
            if exc.winerror != _overlapped.ERROR_IO_PENDING:
                context = {
                    'message': 'Failed to unregister the wait handle',
                    'exception': exc,
                    'future': self,
                }
                if self._source_traceback:
                    context['source_traceback'] = self._source_traceback
                self._loop.call_exception_handler(context)
                return
            # ERROR_IO_PENDING is not an error, the wait was unregistered

        self._event_fut = self._proactor._wait_cancel(self._event,
                                                      self._unregister_wait_cb)


class PipeServer(object):
    """Class representing a pipe server.

    This is much like a bound, listening socket.
    """
    def __init__(self, address):
        self._address = address
        self._free_instances = weakref.WeakSet()
        # initialize the pipe attribute before calling _server_pipe_handle()
        # because this function can raise an exception and the destructor calls
        # the close() method
        self._pipe = None
        self._accept_pipe_future = None
        self._pipe = self._server_pipe_handle(True)

    def _get_unconnected_pipe(self):
        # Create new instance and return previous one.  This ensures
        # that (until the server is closed) there is always at least
        # one pipe handle for address.  Therefore if a client attempts
        # to connect it will not fail with FileNotFoundError.
        tmp, self._pipe = self._pipe, self._server_pipe_handle(False)
        return tmp

    def _server_pipe_handle(self, first):
        # Return a wrapper for a new pipe handle.
        if self.closed():
            return None
        flags = _winapi.PIPE_ACCESS_DUPLEX | _winapi.FILE_FLAG_OVERLAPPED
        if first:
            flags |= _winapi.FILE_FLAG_FIRST_PIPE_INSTANCE
        h = _winapi.CreateNamedPipe(
            self._address, flags,
            _winapi.PIPE_TYPE_MESSAGE | _winapi.PIPE_READMODE_MESSAGE |
            _winapi.PIPE_WAIT,
            _winapi.PIPE_UNLIMITED_INSTANCES,
            windows_utils.BUFSIZE, windows_utils.BUFSIZE,
            _winapi.NMPWAIT_WAIT_FOREVER, _winapi.NULL)
        pipe = windows_utils.PipeHandle(h)
        self._free_instances.add(pipe)
        return pipe

    def closed(self):
        return (self._address is None)

    def close(self):
        if self._accept_pipe_future is not None:
            self._accept_pipe_future.cancel()
            self._accept_pipe_future = None
        # Close all instances which have not been connected to by a client.
        if self._address is not None:
            for pipe in self._free_instances:
                pipe.close()
            self._pipe = None
            self._address = None
            self._free_instances.clear()

    __del__ = close


class _WindowsSelectorEventLoop(selector_events.BaseSelectorEventLoop):
    """Windows version of selector event loop."""


class ProactorEventLoop(proactor_events.BaseProactorEventLoop):
    """Windows version of proactor event loop using IOCP."""

    def __init__(self, proactor=None):
        if proactor is None:
            proactor = IocpProactor()
        super().__init__(proactor)

    async def create_pipe_connection(self, protocol_factory, address):
        f = self._proactor.connect_pipe(address)
        pipe = await f
        protocol = protocol_factory()
        trans = self._make_duplex_pipe_transport(pipe, protocol,
                                                 extra={'addr': address})
        return trans, protocol

    async def start_serving_pipe(self, protocol_factory, address):
        server = PipeServer(address)

        def loop_accept_pipe(f=None):
            pipe = None
            try:
                if f:
                    pipe = f.result()
                    server._free_instances.discard(pipe)

                    if server.closed():
                        # A client connected before the server was closed:
                        # drop the client (close the pipe) and exit
                        pipe.close()
                        return

                    protocol = protocol_factory()
                    self._make_duplex_pipe_transport(
                        pipe, protocol, extra={'addr': address})

                pipe = server._get_unconnected_pipe()
                if pipe is None:
                    return

                f = self._proactor.accept_pipe(pipe)
            except OSError as exc:
                if pipe and pipe.fileno() != -1:
                    self.call_exception_handler({
                        'message': 'Pipe accept failed',
                        'exception': exc,
                        'pipe': pipe,
                    })
                    pipe.close()
                elif self._debug:
                    logger.warning("Accept pipe failed on pipe %r",
                                   pipe, exc_info=True)
            except futures.CancelledError:
                if pipe:
                    pipe.close()
            else:
                server._accept_pipe_future = f
                f.add_done_callback(loop_accept_pipe)

        self.call_soon(loop_accept_pipe)
        return [server]

    async def _make_subprocess_transport(self, protocol, args, shell,
                                         stdin, stdout, stderr, bufsize,
                                         extra=None, **kwargs):
        waiter = self.create_future()
        transp = _WindowsSubprocessTransport(self, protocol, args, shell,
                                             stdin, stdout, stderr, bufsize,
                                             waiter=waiter, extra=extra,
                                             **kwargs)
        try:
            await waiter
        except Exception:
            transp.close()
            await transp._wait()
            raise

        return transp


class IocpProactor:
    """Proactor implementation using IOCP."""

    def __init__(self, concurrency=0xffffffff):
        self._loop = None
        self._results = []
        self._iocp = _overlapped.CreateIoCompletionPort(
            _overlapped.INVALID_HANDLE_VALUE, NULL, 0, concurrency)
        self._cache = {}
        self._registered = weakref.WeakSet()
        self._unregistered = []
        self._stopped_serving = weakref.WeakSet()

    def __repr__(self):
        return ('<%s overlapped#=%s result#=%s>'
                % (self.__class__.__name__, len(self._cache),
                   len(self._results)))

    def set_loop(self, loop):
        self._loop = loop

    def select(self, timeout=None):
        if not self._results:
            self._poll(timeout)
        tmp = self._results
        self._results = []
        return tmp

    def _result(self, value):
        fut = self._loop.create_future()
        fut.set_result(value)
        return fut

    def recv(self, conn, nbytes, flags=0):
        self._register_with_iocp(conn)
        ov = _overlapped.Overlapped(NULL)
        try:
            if isinstance(conn, socket.socket):
                ov.WSARecv(conn.fileno(), nbytes, flags)
            else:
                ov.ReadFile(conn.fileno(), nbytes)
        except BrokenPipeError:
            return self._result(b'')

        def finish_recv(trans, key, ov):
            try:
                return ov.getresult()
            except OSError as exc:
                if exc.winerror in (_overlapped.ERROR_NETNAME_DELETED,
                                    _overlapped.ERROR_OPERATION_ABORTED):
                    raise ConnectionResetError(*exc.args)
                else:
                    raise

        return self._register(ov, conn, finish_recv)

    def recv_into(self, conn, buf, flags=0):
        self._register_with_iocp(conn)
        ov = _overlapped.Overlapped(NULL)
        try:
            if isinstance(conn, socket.socket):
                ov.WSARecvInto(conn.fileno(), buf, flags)
            else:
                ov.ReadFileInto(conn.fileno(), buf)
        except BrokenPipeError:
            return self._result(b'')

        def finish_recv(trans, key, ov):
            try:
                return ov.getresult()
            except OSError as exc:
                if exc.winerror in (_overlapped.ERROR_NETNAME_DELETED,
                                    _overlapped.ERROR_OPERATION_ABORTED):
                    raise ConnectionResetError(*exc.args)
                else:
                    raise

        return self._register(ov, conn, finish_recv)

    def send(self, conn, buf, flags=0):
        self._register_with_iocp(conn)
        ov = _overlapped.Overlapped(NULL)
        if isinstance(conn, socket.socket):
            ov.WSASend(conn.fileno(), buf, flags)
        else:
            ov.WriteFile(conn.fileno(), buf)

        def finish_send(trans, key, ov):
            try:
                return ov.getresult()
            except OSError as exc:
                if exc.winerror in (_overlapped.ERROR_NETNAME_DELETED,
                                    _overlapped.ERROR_OPERATION_ABORTED):
                    raise ConnectionResetError(*exc.args)
                else:
                    raise

        return self._register(ov, conn, finish_send)

    def accept(self, listener):
        self._register_with_iocp(listener)
        conn = self._get_accept_socket(listener.family)
        ov = _overlapped.Overlapped(NULL)
        ov.AcceptEx(listener.fileno(), conn.fileno())

        def finish_accept(trans, key, ov):
            ov.getresult()
            # Use SO_UPDATE_ACCEPT_CONTEXT so getsockname() etc work.
            buf = struct.pack('@P', listener.fileno())
            conn.setsockopt(socket.SOL_SOCKET,
                            _overlapped.SO_UPDATE_ACCEPT_CONTEXT, buf)
            conn.settimeout(listener.gettimeout())
            return conn, conn.getpeername()

        async def accept_coro(future, conn):
            # Coroutine closing the accept socket if the future is cancelled
            try:
                await future
            except futures.CancelledError:
                conn.close()
                raise

        future = self._register(ov, listener, finish_accept)
        coro = accept_coro(future, conn)
        tasks.ensure_future(coro, loop=self._loop)
        return future

    def connect(self, conn, address):
        self._register_with_iocp(conn)
        # The socket needs to be locally bound before we call ConnectEx().
        try:
            _overlapped.BindLocal(conn.fileno(), conn.family)
        except OSError as e:
            if e.winerror != errno.WSAEINVAL:
                raise
            # Probably already locally bound; check using getsockname().
            if conn.getsockname()[1] == 0:
                raise
        ov = _overlapped.Overlapped(NULL)
        ov.ConnectEx(conn.fileno(), address)

        def finish_connect(trans, key, ov):
            ov.getresult()
            # Use SO_UPDATE_CONNECT_CONTEXT so getsockname() etc work.
            conn.setsockopt(socket.SOL_SOCKET,
                            _overlapped.SO_UPDATE_CONNECT_CONTEXT, 0)
            return conn

        return self._register(ov, conn, finish_connect)

    def sendfile(self, sock, file, offset, count):
        self._register_with_iocp(sock)
        ov = _overlapped.Overlapped(NULL)
        offset_low = offset & 0xffff_ffff
        offset_high = (offset >> 32) & 0xffff_ffff
        ov.TransmitFile(sock.fileno(),
                        msvcrt.get_osfhandle(file.fileno()),
                        offset_low, offset_high,
                        count, 0, 0)

        def finish_sendfile(trans, key, ov):
            try:
                return ov.getresult()
            except OSError as exc:
                if exc.winerror in (_overlapped.ERROR_NETNAME_DELETED,
                                    _overlapped.ERROR_OPERATION_ABORTED):
                    raise ConnectionResetError(*exc.args)
                else:
                    raise
        return self._register(ov, sock, finish_sendfile)

    def accept_pipe(self, pipe):
        self._register_with_iocp(pipe)
        ov = _overlapped.Overlapped(NULL)
        connected = ov.ConnectNamedPipe(pipe.fileno())

        if connected:
            # ConnectNamedPipe() failed with ERROR_PIPE_CONNECTED which means
            # that the pipe is connected. There is no need to wait for the
            # completion of the connection.
            return self._result(pipe)

        def finish_accept_pipe(trans, key, ov):
            ov.getresult()
            return pipe

        return self._register(ov, pipe, finish_accept_pipe)

    async def connect_pipe(self, address):
        delay = CONNECT_PIPE_INIT_DELAY
        while True:
            # Unfortunately there is no way to do an overlapped connect to
            # a pipe.  Call CreateFile() in a loop until it doesn't fail with
            # ERROR_PIPE_BUSY.
            try:
                handle = _overlapped.ConnectPipe(address)
                break
            except OSError as exc:
                if exc.winerror != _overlapped.ERROR_PIPE_BUSY:
                    raise

            # ConnectPipe() failed with ERROR_PIPE_BUSY: retry later
            delay = min(delay * 2, CONNECT_PIPE_MAX_DELAY)
            await tasks.sleep(delay, loop=self._loop)

        return windows_utils.PipeHandle(handle)

    def wait_for_handle(self, handle, timeout=None):
        """Wait for a handle.

        Return a Future object. The result of the future is True if the wait
        completed, or False if the wait did not complete (on timeout).
        """
        return self._wait_for_handle(handle, timeout, False)

    def _wait_cancel(self, event, done_callback):
        fut = self._wait_for_handle(event, None, True)
        # add_done_callback() cannot be used because the wait may only complete
        # in IocpProactor.close(), while the event loop is not running.
        fut._done_callback = done_callback
        return fut

    def _wait_for_handle(self, handle, timeout, _is_cancel):
        if timeout is None:
            ms = _winapi.INFINITE
        else:
            # RegisterWaitForSingleObject() has a resolution of 1 millisecond,
            # round away from zero to wait *at least* timeout seconds.
            ms = math.ceil(timeout * 1e3)

        # We only create ov so we can use ov.address as a key for the cache.
        ov = _overlapped.Overlapped(NULL)
        wait_handle = _overlapped.RegisterWaitWithQueue(
            handle, self._iocp, ov.address, ms)
        if _is_cancel:
            f = _WaitCancelFuture(ov, handle, wait_handle, loop=self._loop)
        else:
            f = _WaitHandleFuture(ov, handle, wait_handle, self,
                                  loop=self._loop)
        if f._source_traceback:
            del f._source_traceback[-1]

        def finish_wait_for_handle(trans, key, ov):
            # Note that this second wait means that we should only use
            # this with handle types where a successful wait has no
            # effect.  So events or processes are all right, but locks
            # or semaphores are not.  Also note if the handle is
            # signalled and then quickly reset, then we may return
            # False even though we have not timed out.
            return f._poll()

        self._cache[ov.address] = (f, ov, 0, finish_wait_for_handle)
        return f

    def _register_with_iocp(self, obj):
        # To get notifications of finished ops on this object sent to the
        # completion port, we must register the handle.
        if obj not in self._registered:
            self._registered.add(obj)
            _overlapped.CreateIoCompletionPort(obj.fileno(), self._iocp, 0, 0)
            # XXX We could also use SetFileCompletionNotificationModes()
            # to avoid sending notifications to completion port of ops
            # that succeed immediately.

    def _register(self, ov, obj, callback):
        # Return a future which will be set with the result of the
        # operation when it completes.  The future's value is actually
        # the value returned by callback().
        f = _OverlappedFuture(ov, loop=self._loop)
        if f._source_traceback:
            del f._source_traceback[-1]
        if not ov.pending:
            # The operation has completed, so no need to postpone the
            # work.  We cannot take this short cut if we need the
            # NumberOfBytes, CompletionKey values returned by
            # PostQueuedCompletionStatus().
            try:
                value = callback(None, None, ov)
            except OSError as e:
                f.set_exception(e)
            else:
                f.set_result(value)
            # Even if GetOverlappedResult() was called, we have to wait for the
            # notification of the completion in GetQueuedCompletionStatus().
            # Register the overlapped operation to keep a reference to the
            # OVERLAPPED object, otherwise the memory is freed and Windows may
            # read uninitialized memory.

        # Register the overlapped operation for later.  Note that
        # we only store obj to prevent it from being garbage
        # collected too early.
        self._cache[ov.address] = (f, ov, obj, callback)
        return f

    def _unregister(self, ov):
        """Unregister an overlapped object.

        Call this method when its future has been cancelled. The event can
        already be signalled (pending in the proactor event queue). It is also
        safe if the event is never signalled (because it was cancelled).
        """
        self._unregistered.append(ov)

    def _get_accept_socket(self, family):
        s = socket.socket(family)
        s.settimeout(0)
        return s

    def _poll(self, timeout=None):
        if timeout is None:
            ms = INFINITE
        elif timeout < 0:
            raise ValueError("negative timeout")
        else:
            # GetQueuedCompletionStatus() has a resolution of 1 millisecond,
            # round away from zero to wait *at least* timeout seconds.
            ms = math.ceil(timeout * 1e3)
            if ms >= INFINITE:
                raise ValueError("timeout too big")

        while True:
            status = _overlapped.GetQueuedCompletionStatus(self._iocp, ms)
            if status is None:
                break
            ms = 0

            err, transferred, key, address = status
            try:
                f, ov, obj, callback = self._cache.pop(address)
            except KeyError:
                if self._loop.get_debug():
                    self._loop.call_exception_handler({
                        'message': ('GetQueuedCompletionStatus() returned an '
                                    'unexpected event'),
                        'status': ('err=%s transferred=%s key=%#x address=%#x'
                                   % (err, transferred, key, address)),
                    })

                # key is either zero, or it is used to return a pipe
                # handle which should be closed to avoid a leak.
                if key not in (0, _overlapped.INVALID_HANDLE_VALUE):
                    _winapi.CloseHandle(key)
                continue

            if obj in self._stopped_serving:
                f.cancel()
            # Don't call the callback if _register() already read the result or
            # if the overlapped has been cancelled
            elif not f.done():
                try:
                    value = callback(transferred, key, ov)
                except OSError as e:
                    f.set_exception(e)
                    self._results.append(f)
                else:
                    f.set_result(value)
                    self._results.append(f)

        # Remove unregistered futures
        for ov in self._unregistered:
            self._cache.pop(ov.address, None)
        self._unregistered.clear()
def _stop_serving(self, obj):
|
||||||
|
# obj is a socket or pipe handle. It will be closed in
|
||||||
|
# BaseProactorEventLoop._stop_serving() which will make any
|
||||||
|
# pending operations fail quickly.
|
||||||
|
self._stopped_serving.add(obj)
|
||||||
|
|
||||||
|
def close(self):
|
||||||
|
# Cancel remaining registered operations.
|
||||||
|
for address, (fut, ov, obj, callback) in list(self._cache.items()):
|
||||||
|
if fut.cancelled():
|
||||||
|
# Nothing to do with cancelled futures
|
||||||
|
pass
|
||||||
|
elif isinstance(fut, _WaitCancelFuture):
|
||||||
|
# _WaitCancelFuture must not be cancelled
|
||||||
|
pass
|
||||||
|
else:
|
||||||
|
try:
|
||||||
|
fut.cancel()
|
||||||
|
except OSError as exc:
|
||||||
|
if self._loop is not None:
|
||||||
|
context = {
|
||||||
|
'message': 'Cancelling a future failed',
|
||||||
|
'exception': exc,
|
||||||
|
'future': fut,
|
||||||
|
}
|
||||||
|
if fut._source_traceback:
|
||||||
|
context['source_traceback'] = fut._source_traceback
|
||||||
|
self._loop.call_exception_handler(context)
|
||||||
|
|
||||||
|
while self._cache:
|
||||||
|
if not self._poll(1):
|
||||||
|
logger.debug('taking long time to close proactor')
|
||||||
|
|
||||||
|
self._results = []
|
||||||
|
if self._iocp is not None:
|
||||||
|
_winapi.CloseHandle(self._iocp)
|
||||||
|
self._iocp = None
|
||||||
|
|
||||||
|
def __del__(self):
|
||||||
|
self.close()
|
||||||
|
|
||||||
|
|
||||||
|
class _WindowsSubprocessTransport(base_subprocess.BaseSubprocessTransport):
|
||||||
|
|
||||||
|
def _start(self, args, shell, stdin, stdout, stderr, bufsize, **kwargs):
|
||||||
|
self._proc = windows_utils.Popen(
|
||||||
|
args, shell=shell, stdin=stdin, stdout=stdout, stderr=stderr,
|
||||||
|
bufsize=bufsize, **kwargs)
|
||||||
|
|
||||||
|
def callback(f):
|
||||||
|
returncode = self._proc.poll()
|
||||||
|
self._process_exited(returncode)
|
||||||
|
|
||||||
|
f = self._loop._proactor.wait_for_handle(int(self._proc._handle))
|
||||||
|
f.add_done_callback(callback)
|
||||||
|
|
||||||
|
|
||||||
|
SelectorEventLoop = _WindowsSelectorEventLoop
|
||||||
|
|
||||||
|
|
||||||
|
class WindowsSelectorEventLoopPolicy(events.BaseDefaultEventLoopPolicy):
|
||||||
|
_loop_factory = SelectorEventLoop
|
||||||
|
|
||||||
|
|
||||||
|
class WindowsProactorEventLoopPolicy(events.BaseDefaultEventLoopPolicy):
|
||||||
|
_loop_factory = ProactorEventLoop
|
||||||
|
|
||||||
|
|
||||||
|
DefaultEventLoopPolicy = WindowsSelectorEventLoopPolicy
|
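
The two policy classes above differ only in their _loop_factory, and the last
line keeps the selector loop as the Windows default on 3.7.  A minimal sketch
of opting in to the proactor loop that the IOCP machinery above backs
(standard asyncio calls only; the echoed command is just an example):

    import asyncio
    import sys

    if sys.platform == 'win32':
        # The default 3.7 policy still creates a SelectorEventLoop;
        # subprocess support on Windows needs the proactor loop.
        asyncio.set_event_loop_policy(asyncio.WindowsProactorEventLoopPolicy())

    async def main():
        proc = await asyncio.create_subprocess_shell('echo hello')
        await proc.wait()

    asyncio.get_event_loop().run_until_complete(main())
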
174
Lib/asyncio/windows_utils.py
Normal file
@@ -0,0 +1,174 @@
"""Various Windows specific bits and pieces."""

import sys

if sys.platform != 'win32':  # pragma: no cover
    raise ImportError('win32 only')

import _winapi
import itertools
import msvcrt
import os
import subprocess
import tempfile
import warnings


__all__ = 'pipe', 'Popen', 'PIPE', 'PipeHandle'


# Constants/globals


BUFSIZE = 8192
PIPE = subprocess.PIPE
STDOUT = subprocess.STDOUT
_mmap_counter = itertools.count()


# Replacement for os.pipe() using handles instead of fds


def pipe(*, duplex=False, overlapped=(True, True), bufsize=BUFSIZE):
    """Like os.pipe() but with overlapped support and using handles not fds."""
    address = tempfile.mktemp(
        prefix=r'\\.\pipe\python-pipe-{:d}-{:d}-'.format(
            os.getpid(), next(_mmap_counter)))

    if duplex:
        openmode = _winapi.PIPE_ACCESS_DUPLEX
        access = _winapi.GENERIC_READ | _winapi.GENERIC_WRITE
        obsize, ibsize = bufsize, bufsize
    else:
        openmode = _winapi.PIPE_ACCESS_INBOUND
        access = _winapi.GENERIC_WRITE
        obsize, ibsize = 0, bufsize

    openmode |= _winapi.FILE_FLAG_FIRST_PIPE_INSTANCE

    if overlapped[0]:
        openmode |= _winapi.FILE_FLAG_OVERLAPPED

    if overlapped[1]:
        flags_and_attribs = _winapi.FILE_FLAG_OVERLAPPED
    else:
        flags_and_attribs = 0

    h1 = h2 = None
    try:
        h1 = _winapi.CreateNamedPipe(
            address, openmode, _winapi.PIPE_WAIT,
            1, obsize, ibsize, _winapi.NMPWAIT_WAIT_FOREVER, _winapi.NULL)

        h2 = _winapi.CreateFile(
            address, access, 0, _winapi.NULL, _winapi.OPEN_EXISTING,
            flags_and_attribs, _winapi.NULL)

        ov = _winapi.ConnectNamedPipe(h1, overlapped=True)
        ov.GetOverlappedResult(True)
        return h1, h2
    except:
        if h1 is not None:
            _winapi.CloseHandle(h1)
        if h2 is not None:
            _winapi.CloseHandle(h2)
        raise
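
# Example (illustrative sketch, not part of this module): a fully
# non-overlapped pair from pipe() can be exercised synchronously with the
# same _winapi calls used above.
#
#     rh, wh = pipe(overlapped=(False, False))
#     try:
#         _winapi.WriteFile(wh, b'ping')
#         data, err = _winapi.ReadFile(rh, BUFSIZE)
#         assert data == b'ping'
#     finally:
#         _winapi.CloseHandle(rh)
#         _winapi.CloseHandle(wh)
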

# Wrapper for a pipe handle


class PipeHandle:
    """Wrapper for an overlapped pipe handle which is vaguely file-object like.

    The IOCP event loop can use these instead of socket objects.
    """
    def __init__(self, handle):
        self._handle = handle

    def __repr__(self):
        if self._handle is not None:
            handle = f'handle={self._handle!r}'
        else:
            handle = 'closed'
        return f'<{self.__class__.__name__} {handle}>'

    @property
    def handle(self):
        return self._handle

    def fileno(self):
        if self._handle is None:
            raise ValueError("I/O operation on closed pipe")
        return self._handle

    def close(self, *, CloseHandle=_winapi.CloseHandle):
        if self._handle is not None:
            CloseHandle(self._handle)
            self._handle = None

    def __del__(self):
        if self._handle is not None:
            warnings.warn(f"unclosed {self!r}", ResourceWarning,
                          source=self)
            self.close()

    def __enter__(self):
        return self

    def __exit__(self, t, v, tb):
        self.close()


# Replacement for subprocess.Popen using overlapped pipe handles


class Popen(subprocess.Popen):
    """Replacement for subprocess.Popen using overlapped pipe handles.

    The stdin, stdout, stderr are None or instances of PipeHandle.
    """
    def __init__(self, args, stdin=None, stdout=None, stderr=None, **kwds):
        assert not kwds.get('universal_newlines')
        assert kwds.get('bufsize', 0) == 0
        stdin_rfd = stdout_wfd = stderr_wfd = None
        stdin_wh = stdout_rh = stderr_rh = None
        if stdin == PIPE:
            stdin_rh, stdin_wh = pipe(overlapped=(False, True), duplex=True)
            stdin_rfd = msvcrt.open_osfhandle(stdin_rh, os.O_RDONLY)
        else:
            stdin_rfd = stdin
        if stdout == PIPE:
            stdout_rh, stdout_wh = pipe(overlapped=(True, False))
            stdout_wfd = msvcrt.open_osfhandle(stdout_wh, 0)
        else:
            stdout_wfd = stdout
        if stderr == PIPE:
            stderr_rh, stderr_wh = pipe(overlapped=(True, False))
            stderr_wfd = msvcrt.open_osfhandle(stderr_wh, 0)
        elif stderr == STDOUT:
            stderr_wfd = stdout_wfd
        else:
            stderr_wfd = stderr
        try:
            super().__init__(args, stdin=stdin_rfd, stdout=stdout_wfd,
                             stderr=stderr_wfd, **kwds)
        except:
            for h in (stdin_wh, stdout_rh, stderr_rh):
                if h is not None:
                    _winapi.CloseHandle(h)
            raise
        else:
            if stdin_wh is not None:
                self.stdin = PipeHandle(stdin_wh)
            if stdout_rh is not None:
                self.stdout = PipeHandle(stdout_rh)
            if stderr_rh is not None:
                self.stderr = PipeHandle(stderr_rh)
        finally:
            if stdin == PIPE:
                os.close(stdin_rfd)
            if stdout == PIPE:
                os.close(stdout_wfd)
            if stderr == PIPE:
                os.close(stderr_wfd)
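
Because Popen above wraps its pipes in PipeHandle objects, they carry raw
overlapped HANDLEs rather than file descriptors; asyncio normally drains them
through the proactor.  A rough win32-only sketch of reading one directly
(illustrative; the child command is only an example):

    import _winapi
    from asyncio import windows_utils

    p = windows_utils.Popen(['cmd', '/c', 'echo hello'],
                            stdout=windows_utils.PIPE)
    try:
        ov, err = _winapi.ReadFile(p.stdout.handle, 100, overlapped=True)
        ov.GetOverlappedResult(True)   # block until the read completes
        print(ov.getbuffer())          # b'hello\r\n'
    finally:
        p.stdout.close()
        p.wait()
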
644
Lib/asyncore.py
Normal file
@@ -0,0 +1,644 @@
# -*- Mode: Python -*-
#   Id: asyncore.py,v 2.51 2000/09/07 22:29:26 rushing Exp
#   Author: Sam Rushing <rushing@nightmare.com>

# ======================================================================
# Copyright 1996 by Sam Rushing
#
#                         All Rights Reserved
#
# Permission to use, copy, modify, and distribute this software and
# its documentation for any purpose and without fee is hereby
# granted, provided that the above copyright notice appear in all
# copies and that both that copyright notice and this permission
# notice appear in supporting documentation, and that the name of Sam
# Rushing not be used in advertising or publicity pertaining to
# distribution of the software without specific, written prior
# permission.
#
# SAM RUSHING DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE,
# INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS, IN
# NO EVENT SHALL SAM RUSHING BE LIABLE FOR ANY SPECIAL, INDIRECT OR
# CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS
# OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT,
# NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
# CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
# ======================================================================

"""Basic infrastructure for asynchronous socket service clients and servers.

There are only two ways to have a program on a single processor do "more
than one thing at a time".  Multi-threaded programming is the simplest and
most popular way to do it, but there is another very different technique,
that lets you have nearly all the advantages of multi-threading, without
actually using multiple threads.  It's really only practical if your program
is largely I/O bound.  If your program is CPU bound, then pre-emptive
scheduled threads are probably what you really need.  Network servers are
rarely CPU-bound, however.

If your operating system supports the select() system call in its I/O
library (and nearly all do), then you can use it to juggle multiple
communication channels at once; doing other work while your I/O is taking
place in the "background."  Although this strategy can seem strange and
complex, especially at first, it is in many ways easier to understand and
control than multi-threaded programming.  The module documented here solves
many of the difficult problems for you, making the task of building
sophisticated high-performance network servers and clients a snap.
"""

import select
import socket
import sys
import time
import warnings

import os
from errno import EALREADY, EINPROGRESS, EWOULDBLOCK, ECONNRESET, EINVAL, \
     ENOTCONN, ESHUTDOWN, EISCONN, EBADF, ECONNABORTED, EPIPE, EAGAIN, \
     errorcode

_DISCONNECTED = frozenset({ECONNRESET, ENOTCONN, ESHUTDOWN, ECONNABORTED, EPIPE,
                           EBADF})

try:
    socket_map
except NameError:
    socket_map = {}

def _strerror(err):
    try:
        return os.strerror(err)
    except (ValueError, OverflowError, NameError):
        if err in errorcode:
            return errorcode[err]
        return "Unknown error %s" % err

class ExitNow(Exception):
    pass

_reraised_exceptions = (ExitNow, KeyboardInterrupt, SystemExit)

def read(obj):
    try:
        obj.handle_read_event()
    except _reraised_exceptions:
        raise
    except:
        obj.handle_error()

def write(obj):
    try:
        obj.handle_write_event()
    except _reraised_exceptions:
        raise
    except:
        obj.handle_error()

def _exception(obj):
    try:
        obj.handle_expt_event()
    except _reraised_exceptions:
        raise
    except:
        obj.handle_error()

def readwrite(obj, flags):
    try:
        if flags & select.POLLIN:
            obj.handle_read_event()
        if flags & select.POLLOUT:
            obj.handle_write_event()
        if flags & select.POLLPRI:
            obj.handle_expt_event()
        if flags & (select.POLLHUP | select.POLLERR | select.POLLNVAL):
            obj.handle_close()
    except OSError as e:
        if e.args[0] not in _DISCONNECTED:
            obj.handle_error()
        else:
            obj.handle_close()
    except _reraised_exceptions:
        raise
    except:
        obj.handle_error()

def poll(timeout=0.0, map=None):
    if map is None:
        map = socket_map
    if map:
        r = []; w = []; e = []
        for fd, obj in list(map.items()):
            is_r = obj.readable()
            is_w = obj.writable()
            if is_r:
                r.append(fd)
            # accepting sockets should not be writable
            if is_w and not obj.accepting:
                w.append(fd)
            if is_r or is_w:
                e.append(fd)
        if [] == r == w == e:
            time.sleep(timeout)
            return

        r, w, e = select.select(r, w, e, timeout)

        for fd in r:
            obj = map.get(fd)
            if obj is None:
                continue
            read(obj)

        for fd in w:
            obj = map.get(fd)
            if obj is None:
                continue
            write(obj)

        for fd in e:
            obj = map.get(fd)
            if obj is None:
                continue
            _exception(obj)

def poll2(timeout=0.0, map=None):
    # Use the poll() support added to the select module in Python 2.0
    if map is None:
        map = socket_map
    if timeout is not None:
        # timeout is in milliseconds
        timeout = int(timeout*1000)
    pollster = select.poll()
    if map:
        for fd, obj in list(map.items()):
            flags = 0
            if obj.readable():
                flags |= select.POLLIN | select.POLLPRI
            # accepting sockets should not be writable
            if obj.writable() and not obj.accepting:
                flags |= select.POLLOUT
            if flags:
                pollster.register(fd, flags)

        r = pollster.poll(timeout)
        for fd, flags in r:
            obj = map.get(fd)
            if obj is None:
                continue
            readwrite(obj, flags)

poll3 = poll2  # Alias for backward compatibility

def loop(timeout=30.0, use_poll=False, map=None, count=None):
    if map is None:
        map = socket_map

    if use_poll and hasattr(select, 'poll'):
        poll_fun = poll2
    else:
        poll_fun = poll

    if count is None:
        while map:
            poll_fun(timeout, map)

    else:
        while map and count > 0:
            poll_fun(timeout, map)
            count = count - 1

class dispatcher:

    debug = False
    connected = False
    accepting = False
    connecting = False
    closing = False
    addr = None
    ignore_log_types = frozenset({'warning'})

    def __init__(self, sock=None, map=None):
        if map is None:
            self._map = socket_map
        else:
            self._map = map

        self._fileno = None

        if sock:
            # Set to nonblocking just to make sure for cases where we
            # get a socket from a blocking source.
            sock.setblocking(0)
            self.set_socket(sock, map)
            self.connected = True
            # The constructor no longer requires that the socket
            # passed be connected.
            try:
                self.addr = sock.getpeername()
            except OSError as err:
                if err.args[0] in (ENOTCONN, EINVAL):
                    # To handle the case where we got an unconnected
                    # socket.
                    self.connected = False
                else:
                    # The socket is broken in some unknown way, alert
                    # the user and remove it from the map (to prevent
                    # polling of broken sockets).
                    self.del_channel(map)
                    raise
        else:
            self.socket = None

    def __repr__(self):
        status = [self.__class__.__module__+"."+self.__class__.__qualname__]
        if self.accepting and self.addr:
            status.append('listening')
        elif self.connected:
            status.append('connected')
        if self.addr is not None:
            try:
                status.append('%s:%d' % self.addr)
            except TypeError:
                status.append(repr(self.addr))
        return '<%s at %#x>' % (' '.join(status), id(self))

    __str__ = __repr__

    def add_channel(self, map=None):
        #self.log_info('adding channel %s' % self)
        if map is None:
            map = self._map
        map[self._fileno] = self

    def del_channel(self, map=None):
        fd = self._fileno
        if map is None:
            map = self._map
        if fd in map:
            #self.log_info('closing channel %d:%s' % (fd, self))
            del map[fd]
        self._fileno = None

    def create_socket(self, family=socket.AF_INET, type=socket.SOCK_STREAM):
        self.family_and_type = family, type
        sock = socket.socket(family, type)
        sock.setblocking(0)
        self.set_socket(sock)

    def set_socket(self, sock, map=None):
        self.socket = sock
        self._fileno = sock.fileno()
        self.add_channel(map)

    def set_reuse_addr(self):
        # try to re-use a server port if possible
        try:
            self.socket.setsockopt(
                socket.SOL_SOCKET, socket.SO_REUSEADDR,
                self.socket.getsockopt(socket.SOL_SOCKET,
                                       socket.SO_REUSEADDR) | 1
                )
        except OSError:
            pass

    # ==================================================
    # predicates for select()
    # these are used as filters for the lists of sockets
    # to pass to select().
    # ==================================================

    def readable(self):
        return True

    def writable(self):
        return True

    # ==================================================
    # socket object methods.
    # ==================================================

    def listen(self, num):
        self.accepting = True
        if os.name == 'nt' and num > 5:
            num = 5
        return self.socket.listen(num)

    def bind(self, addr):
        self.addr = addr
        return self.socket.bind(addr)

    def connect(self, address):
        self.connected = False
        self.connecting = True
        err = self.socket.connect_ex(address)
        if err in (EINPROGRESS, EALREADY, EWOULDBLOCK) \
        or err == EINVAL and os.name == 'nt':
            self.addr = address
            return
        if err in (0, EISCONN):
            self.addr = address
            self.handle_connect_event()
        else:
            raise OSError(err, errorcode[err])

    def accept(self):
        # XXX can return either an address pair or None
        try:
            conn, addr = self.socket.accept()
        except TypeError:
            return None
        except OSError as why:
            if why.args[0] in (EWOULDBLOCK, ECONNABORTED, EAGAIN):
                return None
            else:
                raise
        else:
            return conn, addr

    def send(self, data):
        try:
            result = self.socket.send(data)
            return result
        except OSError as why:
            if why.args[0] == EWOULDBLOCK:
                return 0
            elif why.args[0] in _DISCONNECTED:
                self.handle_close()
                return 0
            else:
                raise

    def recv(self, buffer_size):
        try:
            data = self.socket.recv(buffer_size)
            if not data:
                # a closed connection is indicated by signaling
                # a read condition, and having recv() return 0.
                self.handle_close()
                return b''
            else:
                return data
        except OSError as why:
            # winsock sometimes raises ENOTCONN
            if why.args[0] in _DISCONNECTED:
                self.handle_close()
                return b''
            else:
                raise

    def close(self):
        self.connected = False
        self.accepting = False
        self.connecting = False
        self.del_channel()
        if self.socket is not None:
            try:
                self.socket.close()
            except OSError as why:
                if why.args[0] not in (ENOTCONN, EBADF):
                    raise

    # log and log_info may be overridden to provide more sophisticated
    # logging and warning methods.  In general, log is for 'hit' logging
    # and 'log_info' is for informational, warning and error logging.

    def log(self, message):
        sys.stderr.write('log: %s\n' % str(message))

    def log_info(self, message, type='info'):
        if type not in self.ignore_log_types:
            print('%s: %s' % (type, message))

    def handle_read_event(self):
        if self.accepting:
            # accepting sockets are never connected, they "spawn" new
            # sockets that are connected
            self.handle_accept()
        elif not self.connected:
            if self.connecting:
                self.handle_connect_event()
            self.handle_read()
        else:
            self.handle_read()

    def handle_connect_event(self):
        err = self.socket.getsockopt(socket.SOL_SOCKET, socket.SO_ERROR)
        if err != 0:
            raise OSError(err, _strerror(err))
        self.handle_connect()
        self.connected = True
        self.connecting = False

    def handle_write_event(self):
        if self.accepting:
            # Accepting sockets shouldn't get a write event.
            # We will pretend it didn't happen.
            return

        if not self.connected:
            if self.connecting:
                self.handle_connect_event()
        self.handle_write()

    def handle_expt_event(self):
        # handle_expt_event() is called if there might be an error on the
        # socket, or if there is OOB data
        # check for the error condition first
        err = self.socket.getsockopt(socket.SOL_SOCKET, socket.SO_ERROR)
        if err != 0:
            # we can get here when select.select() says that there is an
            # exceptional condition on the socket
            # since there is an error, we'll go ahead and close the socket
            # like we would in a subclassed handle_read() that received no
            # data
            self.handle_close()
        else:
            self.handle_expt()

    def handle_error(self):
        nil, t, v, tbinfo = compact_traceback()

        # sometimes a user repr method will crash.
        try:
            self_repr = repr(self)
        except:
            self_repr = '<__repr__(self) failed for object at %0x>' % id(self)

        self.log_info(
            'uncaptured python exception, closing channel %s (%s:%s %s)' % (
                self_repr,
                t,
                v,
                tbinfo
                ),
            'error'
            )
        self.handle_close()

    def handle_expt(self):
        self.log_info('unhandled incoming priority event', 'warning')

    def handle_read(self):
        self.log_info('unhandled read event', 'warning')

    def handle_write(self):
        self.log_info('unhandled write event', 'warning')

    def handle_connect(self):
        self.log_info('unhandled connect event', 'warning')

    def handle_accept(self):
        pair = self.accept()
        if pair is not None:
            self.handle_accepted(*pair)

    def handle_accepted(self, sock, addr):
        sock.close()
        self.log_info('unhandled accepted event', 'warning')

    def handle_close(self):
        self.log_info('unhandled close event', 'warning')
        self.close()
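
# Example (illustrative sketch, not part of this module): the usual pattern
# is to subclass dispatcher and override the handle_*() events above, e.g.
# a minimal echo server driven by loop().
#
#     class EchoHandler(dispatcher):
#         def handle_read(self):
#             self.send(self.recv(8192))
#
#     class EchoServer(dispatcher):
#         def __init__(self, host, port):
#             dispatcher.__init__(self)
#             self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
#             self.set_reuse_addr()
#             self.bind((host, port))
#             self.listen(5)
#
#         def handle_accepted(self, sock, addr):
#             EchoHandler(sock)
#
#     EchoServer('localhost', 8080)
#     loop()
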
# ---------------------------------------------------------------------------
# adds simple buffered output capability, useful for simple clients.
# [for more sophisticated usage use asynchat.async_chat]
# ---------------------------------------------------------------------------

class dispatcher_with_send(dispatcher):

    def __init__(self, sock=None, map=None):
        dispatcher.__init__(self, sock, map)
        self.out_buffer = b''

    def initiate_send(self):
        num_sent = 0
        num_sent = dispatcher.send(self, self.out_buffer[:65536])
        self.out_buffer = self.out_buffer[num_sent:]

    def handle_write(self):
        self.initiate_send()

    def writable(self):
        return (not self.connected) or len(self.out_buffer)

    def send(self, data):
        if self.debug:
            self.log_info('sending %s' % repr(data))
        self.out_buffer = self.out_buffer + data
        self.initiate_send()
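
# Example (illustrative sketch): dispatcher_with_send buffers writes, so a
# client can hand over its whole payload once connected and let the loop
# flush it as the socket allows.
#
#     class LineClient(dispatcher_with_send):
#         def __init__(self, host, port, payload):
#             dispatcher_with_send.__init__(self)
#             self.payload = payload
#             self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
#             self.connect((host, port))
#
#         def handle_connect(self):
#             self.send(self.payload)
#
#         def handle_read(self):
#             print(self.recv(8192))
#
#     LineClient('localhost', 8080, b'hello\r\n')
#     loop()
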
# ---------------------------------------------------------------------------
# used for debugging.
# ---------------------------------------------------------------------------

def compact_traceback():
    t, v, tb = sys.exc_info()
    tbinfo = []
    if not tb:  # Must have a traceback
        raise AssertionError("traceback does not exist")
    while tb:
        tbinfo.append((
            tb.tb_frame.f_code.co_filename,
            tb.tb_frame.f_code.co_name,
            str(tb.tb_lineno)
            ))
        tb = tb.tb_next

    # just to be safe
    del tb

    file, function, line = tbinfo[-1]
    info = ' '.join(['[%s|%s|%s]' % x for x in tbinfo])
    return (file, function, line), t, v, info

def close_all(map=None, ignore_all=False):
    if map is None:
        map = socket_map
    for x in list(map.values()):
        try:
            x.close()
        except OSError as x:
            if x.args[0] == EBADF:
                pass
            elif not ignore_all:
                raise
        except _reraised_exceptions:
            raise
        except:
            if not ignore_all:
                raise
    map.clear()

# Asynchronous File I/O:
#
# After a little research (reading man pages on various unixen, and
# digging through the linux kernel), I've determined that select()
# isn't meant for doing asynchronous file i/o.
# Heartening, though - reading linux/mm/filemap.c shows that linux
# supports asynchronous read-ahead.  So _MOST_ of the time, the data
# will be sitting in memory for us already when we go to read it.
#
# What other OS's (besides NT) support async file i/o?  [VMS?]
#
# Regardless, this is useful for pipes, and stdin/stdout...

if os.name == 'posix':
    class file_wrapper:
        # Here we override just enough to make a file
        # look like a socket for the purposes of asyncore.
        # The passed fd is automatically os.dup()'d

        def __init__(self, fd):
            self.fd = os.dup(fd)

        def __del__(self):
            if self.fd >= 0:
                warnings.warn("unclosed file %r" % self, ResourceWarning,
                              source=self)
            self.close()

        def recv(self, *args):
            return os.read(self.fd, *args)

        def send(self, *args):
            return os.write(self.fd, *args)

        def getsockopt(self, level, optname, buflen=None):
            if (level == socket.SOL_SOCKET and
                optname == socket.SO_ERROR and
                not buflen):
                return 0
            raise NotImplementedError("Only asyncore specific behaviour "
                                      "implemented.")

        read = recv
        write = send

        def close(self):
            if self.fd < 0:
                return
            fd = self.fd
            self.fd = -1
            os.close(fd)

        def fileno(self):
            return self.fd

    class file_dispatcher(dispatcher):

        def __init__(self, fd, map=None):
            dispatcher.__init__(self, None, map)
            self.connected = True
            try:
                fd = fd.fileno()
            except AttributeError:
                pass
            self.set_file(fd)
            # set it to non-blocking mode
            os.set_blocking(fd, False)

        def set_file(self, fd):
            self.socket = file_wrapper(fd)
            self._fileno = self.socket.fileno()
            self.add_channel()
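
The file_wrapper/file_dispatcher pair at the end of the module lets the same
event loop watch plain file descriptors on POSIX.  A small sketch (POSIX
only; the class name is made up for the example):

    import asyncore
    import os

    class PipeWatcher(asyncore.file_dispatcher):
        def writable(self):
            return False        # read-only; avoid busy write events

        def handle_read(self):
            print('got', self.recv(512))

        def handle_close(self):
            self.close()

    r, w = os.pipe()
    PipeWatcher(r)
    os.write(w, b'ping')
    os.close(w)
    asyncore.loop(timeout=1, count=3)   # a few polls are enough here
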
595
Lib/base64.py
Normal file
@@ -0,0 +1,595 @@
#! /usr/bin/env python3

"""Base16, Base32, Base64 (RFC 3548), Base85 and Ascii85 data encodings"""

# Modified 04-Oct-1995 by Jack Jansen to use binascii module
# Modified 30-Dec-2003 by Barry Warsaw to add full RFC 3548 support
# Modified 22-May-2007 by Guido van Rossum to use bytes everywhere

import re
import struct
import binascii


__all__ = [
    # Legacy interface exports traditional RFC 2045 Base64 encodings
    'encode', 'decode', 'encodebytes', 'decodebytes',
    # Generalized interface for other encodings
    'b64encode', 'b64decode', 'b32encode', 'b32decode',
    'b16encode', 'b16decode',
    # Base85 and Ascii85 encodings
    'b85encode', 'b85decode', 'a85encode', 'a85decode',
    # Standard Base64 encoding
    'standard_b64encode', 'standard_b64decode',
    # Some common Base64 alternatives.  As referenced by RFC 3458, see thread
    # starting at:
    #
    # http://zgp.org/pipermail/p2p-hackers/2001-September/000316.html
    'urlsafe_b64encode', 'urlsafe_b64decode',
    ]


bytes_types = (bytes, bytearray)  # Types acceptable as binary data

def _bytes_from_decode_data(s):
    if isinstance(s, str):
        try:
            return s.encode('ascii')
        except UnicodeEncodeError:
            raise ValueError('string argument should contain only ASCII characters')
    if isinstance(s, bytes_types):
        return s
    try:
        return memoryview(s).tobytes()
    except TypeError:
        raise TypeError("argument should be a bytes-like object or ASCII "
                        "string, not %r" % s.__class__.__name__) from None


# Base64 encoding/decoding uses binascii

def b64encode(s, altchars=None):
    """Encode the bytes-like object s using Base64 and return a bytes object.

    Optional altchars should be a byte string of length 2 which specifies an
    alternative alphabet for the '+' and '/' characters.  This allows an
    application to e.g. generate url or filesystem safe Base64 strings.
    """
    encoded = binascii.b2a_base64(s, newline=False)
    if altchars is not None:
        assert len(altchars) == 2, repr(altchars)
        return encoded.translate(bytes.maketrans(b'+/', altchars))
    return encoded


def b64decode(s, altchars=None, validate=False):
    """Decode the Base64 encoded bytes-like object or ASCII string s.

    Optional altchars must be a bytes-like object or ASCII string of length 2
    which specifies the alternative alphabet used instead of the '+' and '/'
    characters.

    The result is returned as a bytes object.  A binascii.Error is raised if
    s is incorrectly padded.

    If validate is False (the default), characters that are neither in the
    normal base-64 alphabet nor the alternative alphabet are discarded prior
    to the padding check.  If validate is True, these non-alphabet characters
    in the input result in a binascii.Error.
    """
    s = _bytes_from_decode_data(s)
    if altchars is not None:
        altchars = _bytes_from_decode_data(altchars)
        assert len(altchars) == 2, repr(altchars)
        s = s.translate(bytes.maketrans(altchars, b'+/'))
    if validate and not re.match(b'^[A-Za-z0-9+/]*={0,2}$', s):
        raise binascii.Error('Non-base64 digit found')
    return binascii.a2b_base64(s)
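
# Example (illustrative; this is the classic RFC 2617 vector, also used by
# test() at the bottom of this file):
#
#     >>> import base64
#     >>> base64.b64encode(b'Aladdin:open sesame')
#     b'QWxhZGRpbjpvcGVuIHNlc2FtZQ=='
#     >>> base64.b64decode(b'QWxhZGRpbjpvcGVuIHNlc2FtZQ==')
#     b'Aladdin:open sesame'
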

def standard_b64encode(s):
    """Encode bytes-like object s using the standard Base64 alphabet.

    The result is returned as a bytes object.
    """
    return b64encode(s)

def standard_b64decode(s):
    """Decode bytes encoded with the standard Base64 alphabet.

    Argument s is a bytes-like object or ASCII string to decode.  The result
    is returned as a bytes object.  A binascii.Error is raised if the input
    is incorrectly padded.  Characters that are not in the standard alphabet
    are discarded prior to the padding check.
    """
    return b64decode(s)


_urlsafe_encode_translation = bytes.maketrans(b'+/', b'-_')
_urlsafe_decode_translation = bytes.maketrans(b'-_', b'+/')

def urlsafe_b64encode(s):
    """Encode bytes using the URL- and filesystem-safe Base64 alphabet.

    Argument s is a bytes-like object to encode.  The result is returned as a
    bytes object.  The alphabet uses '-' instead of '+' and '_' instead of
    '/'.
    """
    return b64encode(s).translate(_urlsafe_encode_translation)

def urlsafe_b64decode(s):
    """Decode bytes using the URL- and filesystem-safe Base64 alphabet.

    Argument s is a bytes-like object or ASCII string to decode.  The result
    is returned as a bytes object.  A binascii.Error is raised if the input
    is incorrectly padded.  Characters that are not in the URL-safe base-64
    alphabet, and are not a plus '+' or slash '/', are discarded prior to the
    padding check.

    The alphabet uses '-' instead of '+' and '_' instead of '/'.
    """
    s = _bytes_from_decode_data(s)
    s = s.translate(_urlsafe_decode_translation)
    return b64decode(s)



# Base32 encoding/decoding must be done in Python
_b32alphabet = b'ABCDEFGHIJKLMNOPQRSTUVWXYZ234567'
_b32tab2 = None
_b32rev = None

def b32encode(s):
    """Encode the bytes-like object s using Base32 and return a bytes object.
    """
    global _b32tab2
    # Delay the initialization of the table to not waste memory
    # if the function is never called
    if _b32tab2 is None:
        b32tab = [bytes((i,)) for i in _b32alphabet]
        _b32tab2 = [a + b for a in b32tab for b in b32tab]
        b32tab = None

    if not isinstance(s, bytes_types):
        s = memoryview(s).tobytes()
    leftover = len(s) % 5
    # Pad the last quantum with zero bits if necessary
    if leftover:
        s = s + b'\0' * (5 - leftover)  # Don't use += !
    encoded = bytearray()
    from_bytes = int.from_bytes
    b32tab2 = _b32tab2
    for i in range(0, len(s), 5):
        c = from_bytes(s[i: i + 5], 'big')
        encoded += (b32tab2[c >> 30] +            # bits 1 - 10
                    b32tab2[(c >> 20) & 0x3ff] +  # bits 11 - 20
                    b32tab2[(c >> 10) & 0x3ff] +  # bits 21 - 30
                    b32tab2[c & 0x3ff]            # bits 31 - 40
                    )
    # Adjust for any leftover partial quanta
    if leftover == 1:
        encoded[-6:] = b'======'
    elif leftover == 2:
        encoded[-4:] = b'===='
    elif leftover == 3:
        encoded[-3:] = b'==='
    elif leftover == 4:
        encoded[-1:] = b'='
    return bytes(encoded)

def b32decode(s, casefold=False, map01=None):
    """Decode the Base32 encoded bytes-like object or ASCII string s.

    Optional casefold is a flag specifying whether a lowercase alphabet is
    acceptable as input.  For security purposes, the default is False.

    RFC 3548 allows for optional mapping of the digit 0 (zero) to the
    letter O (oh), and for optional mapping of the digit 1 (one) to
    either the letter I (eye) or letter L (el).  The optional argument
    map01 when not None, specifies which letter the digit 1 should be
    mapped to (when map01 is not None, the digit 0 is always mapped to
    the letter O).  For security purposes the default is None, so that
    0 and 1 are not allowed in the input.

    The result is returned as a bytes object.  A binascii.Error is raised if
    the input is incorrectly padded or if there are non-alphabet
    characters present in the input.
    """
    global _b32rev
    # Delay the initialization of the table to not waste memory
    # if the function is never called
    if _b32rev is None:
        _b32rev = {v: k for k, v in enumerate(_b32alphabet)}
    s = _bytes_from_decode_data(s)
    if len(s) % 8:
        raise binascii.Error('Incorrect padding')
    # Handle section 2.4 zero and one mapping.  The flag map01 will be either
    # False, or the character to map the digit 1 (one) to.  It should be
    # either L (el) or I (eye).
    if map01 is not None:
        map01 = _bytes_from_decode_data(map01)
        assert len(map01) == 1, repr(map01)
        s = s.translate(bytes.maketrans(b'01', b'O' + map01))
    if casefold:
        s = s.upper()
    # Strip off pad characters from the right.  We need to count the pad
    # characters because this will tell us how many null bytes to remove from
    # the end of the decoded string.
    l = len(s)
    s = s.rstrip(b'=')
    padchars = l - len(s)
    # Now decode the full quanta
    decoded = bytearray()
    b32rev = _b32rev
    for i in range(0, len(s), 8):
        quanta = s[i: i + 8]
        acc = 0
        try:
            for c in quanta:
                acc = (acc << 5) + b32rev[c]
        except KeyError:
            raise binascii.Error('Non-base32 digit found') from None
        decoded += acc.to_bytes(5, 'big')
    # Process the last, partial quanta
    if l % 8 or padchars not in {0, 1, 3, 4, 6}:
        raise binascii.Error('Incorrect padding')
    if padchars and decoded:
        acc <<= 5 * padchars
        last = acc.to_bytes(5, 'big')
        leftover = (43 - 5 * padchars) // 8  # 1: 4, 3: 3, 4: 2, 6: 1
        decoded[-5:] = last[:leftover]
    return bytes(decoded)
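
# Example (illustrative): Base32 output is always padded to a multiple of 8
# characters, and casefold opts in to lowercase input.
#
#     >>> import base64
#     >>> base64.b32encode(b'abc')
#     b'MFRGG==='
#     >>> base64.b32decode(b'mfrgg===', casefold=True)
#     b'abc'
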
# RFC 3548, Base 16 Alphabet specifies uppercase, but hexlify() returns
# lowercase.  The RFC also recommends against accepting input case
# insensitively.
def b16encode(s):
    """Encode the bytes-like object s using Base16 and return a bytes object.
    """
    return binascii.hexlify(s).upper()


def b16decode(s, casefold=False):
    """Decode the Base16 encoded bytes-like object or ASCII string s.

    Optional casefold is a flag specifying whether a lowercase alphabet is
    acceptable as input.  For security purposes, the default is False.

    The result is returned as a bytes object.  A binascii.Error is raised if
    s is incorrectly padded or if there are non-alphabet characters present
    in the input.
    """
    s = _bytes_from_decode_data(s)
    if casefold:
        s = s.upper()
    if re.search(b'[^0-9A-F]', s):
        raise binascii.Error('Non-base16 digit found')
    return binascii.unhexlify(s)

#
# Ascii85 encoding/decoding
#

_a85chars = None
_a85chars2 = None
_A85START = b"<~"
_A85END = b"~>"

def _85encode(b, chars, chars2, pad=False, foldnuls=False, foldspaces=False):
    # Helper function for a85encode and b85encode
    if not isinstance(b, bytes_types):
        b = memoryview(b).tobytes()

    padding = (-len(b)) % 4
    if padding:
        b = b + b'\0' * padding
    words = struct.Struct('!%dI' % (len(b) // 4)).unpack(b)

    chunks = [b'z' if foldnuls and not word else
              b'y' if foldspaces and word == 0x20202020 else
              (chars2[word // 614125] +
               chars2[word // 85 % 7225] +
               chars[word % 85])
              for word in words]

    if padding and not pad:
        if chunks[-1] == b'z':
            chunks[-1] = chars[0] * 5
        chunks[-1] = chunks[-1][:-padding]

    return b''.join(chunks)

def a85encode(b, *, foldspaces=False, wrapcol=0, pad=False, adobe=False):
    """Encode bytes-like object b using Ascii85 and return a bytes object.

    foldspaces is an optional flag that uses the special short sequence 'y'
    instead of 4 consecutive spaces (ASCII 0x20) as supported by 'btoa'.  This
    feature is not supported by the "standard" Adobe encoding.

    wrapcol controls whether the output should have newline (b'\\n') characters
    added to it.  If this is non-zero, each output line will be at most this
    many characters long.

    pad controls whether the input is padded to a multiple of 4 before
    encoding.  Note that the btoa implementation always pads.

    adobe controls whether the encoded byte sequence is framed with <~ and ~>,
    which is used by the Adobe implementation.
    """
    global _a85chars, _a85chars2
    # Delay the initialization of tables to not waste memory
    # if the function is never called
    if _a85chars is None:
        _a85chars = [bytes((i,)) for i in range(33, 118)]
        _a85chars2 = [(a + b) for a in _a85chars for b in _a85chars]

    result = _85encode(b, _a85chars, _a85chars2, pad, True, foldspaces)

    if adobe:
        result = _A85START + result
    if wrapcol:
        wrapcol = max(2 if adobe else 1, wrapcol)
        chunks = [result[i: i + wrapcol]
                  for i in range(0, len(result), wrapcol)]
        if adobe:
            if len(chunks[-1]) + 2 > wrapcol:
                chunks.append(b'')
        result = b'\n'.join(chunks)
    if adobe:
        result += _A85END

    return result

def a85decode(b, *, foldspaces=False, adobe=False, ignorechars=b' \t\n\r\v'):
    """Decode the Ascii85 encoded bytes-like object or ASCII string b.

    foldspaces is a flag that specifies whether the 'y' short sequence should be
    accepted as shorthand for 4 consecutive spaces (ASCII 0x20).  This feature is
    not supported by the "standard" Adobe encoding.

    adobe controls whether the input sequence is in Adobe Ascii85 format (i.e.
    is framed with <~ and ~>).

    ignorechars should be a byte string containing characters to ignore from the
    input.  This should only contain whitespace characters, and by default
    contains all whitespace characters in ASCII.

    The result is returned as a bytes object.
    """
    b = _bytes_from_decode_data(b)
    if adobe:
        if not b.endswith(_A85END):
            raise ValueError(
                "Ascii85 encoded byte sequences must end "
                "with {!r}".format(_A85END)
                )
        if b.startswith(_A85START):
            b = b[2:-2]  # Strip off start/end markers
        else:
            b = b[:-2]
    #
    # We have to go through this stepwise, so as to ignore spaces and handle
    # special short sequences
    #
    packI = struct.Struct('!I').pack
    decoded = []
    decoded_append = decoded.append
    curr = []
    curr_append = curr.append
    curr_clear = curr.clear
    for x in b + b'u' * 4:
        if b'!'[0] <= x <= b'u'[0]:
            curr_append(x)
            if len(curr) == 5:
                acc = 0
                for x in curr:
                    acc = 85 * acc + (x - 33)
                try:
                    decoded_append(packI(acc))
                except struct.error:
                    raise ValueError('Ascii85 overflow') from None
                curr_clear()
        elif x == b'z'[0]:
            if curr:
                raise ValueError('z inside Ascii85 5-tuple')
            decoded_append(b'\0\0\0\0')
        elif foldspaces and x == b'y'[0]:
            if curr:
                raise ValueError('y inside Ascii85 5-tuple')
            decoded_append(b'\x20\x20\x20\x20')
        elif x in ignorechars:
            # Skip whitespace
            continue
        else:
            raise ValueError('Non-Ascii85 digit found: %c' % x)

    result = b''.join(decoded)
    padding = 4 - len(curr)
    if padding:
        # Throw away the extra padding
        result = result[:-padding]
    return result
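
# Example (illustrative): with adobe=True the output is framed with <~ and
# ~>, and four zero bytes fold to the single short form 'z'.
#
#     >>> import base64
#     >>> base64.a85encode(b'\x00\x00\x00\x00', adobe=True)
#     b'<~z~>'
#     >>> base64.a85decode(b'<~z~>', adobe=True)
#     b'\x00\x00\x00\x00'
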

# The following code is originally taken (with permission) from Mercurial

_b85alphabet = (b"0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"
                b"abcdefghijklmnopqrstuvwxyz!#$%&()*+-;<=>?@^_`{|}~")
_b85chars = None
_b85chars2 = None
_b85dec = None

def b85encode(b, pad=False):
    """Encode bytes-like object b in base85 format and return a bytes object.

    If pad is true, the input is padded with b'\\0' so its length is a multiple of
    4 bytes before encoding.
    """
    global _b85chars, _b85chars2
    # Delay the initialization of tables to not waste memory
    # if the function is never called
    if _b85chars is None:
        _b85chars = [bytes((i,)) for i in _b85alphabet]
        _b85chars2 = [(a + b) for a in _b85chars for b in _b85chars]
    return _85encode(b, _b85chars, _b85chars2, pad)

def b85decode(b):
    """Decode the base85-encoded bytes-like object or ASCII string b

    The result is returned as a bytes object.
    """
    global _b85dec
    # Delay the initialization of tables to not waste memory
    # if the function is never called
    if _b85dec is None:
        _b85dec = [None] * 256
        for i, c in enumerate(_b85alphabet):
            _b85dec[c] = i

    b = _bytes_from_decode_data(b)
    padding = (-len(b)) % 5
    b = b + b'~' * padding
    out = []
    packI = struct.Struct('!I').pack
    for i in range(0, len(b), 5):
        chunk = b[i:i + 5]
        acc = 0
        try:
            for c in chunk:
                acc = acc * 85 + _b85dec[c]
        except TypeError:
            for j, c in enumerate(chunk):
                if _b85dec[c] is None:
                    raise ValueError('bad base85 character at position %d'
                                     % (i + j)) from None
            raise
        try:
            out.append(packI(acc))
        except struct.error:
            raise ValueError('base85 overflow in hunk starting at byte %d'
                             % i) from None

    result = b''.join(out)
    if padding:
        result = result[:-padding]
    return result
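
# Example (illustrative): this variant has no short forms and no framing;
# encode and decode are exact inverses for any byte string.
#
#     >>> import base64
#     >>> payload = b'some binary\x00payload'
#     >>> base64.b85decode(base64.b85encode(payload)) == payload
#     True
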

# Legacy interface.  This code could be cleaned up since I don't believe
# binascii has any line length limitations.  It just doesn't seem worth it
# though.  The files should be opened in binary mode.

MAXLINESIZE = 76  # Excluding the CRLF
MAXBINSIZE = (MAXLINESIZE//4)*3

def encode(input, output):
    """Encode a file; input and output are binary files."""
    while True:
        s = input.read(MAXBINSIZE)
        if not s:
            break
        while len(s) < MAXBINSIZE:
            ns = input.read(MAXBINSIZE-len(s))
            if not ns:
                break
            s += ns
        line = binascii.b2a_base64(s)
        output.write(line)


def decode(input, output):
    """Decode a file; input and output are binary files."""
    while True:
        line = input.readline()
        if not line:
            break
        s = binascii.a2b_base64(line)
        output.write(s)

def _input_type_check(s):
    try:
        m = memoryview(s)
    except TypeError as err:
        msg = "expected bytes-like object, not %s" % s.__class__.__name__
        raise TypeError(msg) from err
    if m.format not in ('c', 'b', 'B'):
        msg = ("expected single byte elements, not %r from %s" %
               (m.format, s.__class__.__name__))
        raise TypeError(msg)
    if m.ndim != 1:
        msg = ("expected 1-D data, not %d-D data from %s" %
               (m.ndim, s.__class__.__name__))
        raise TypeError(msg)


def encodebytes(s):
    """Encode a bytestring into a bytes object containing multiple lines
    of base-64 data."""
    _input_type_check(s)
    pieces = []
    for i in range(0, len(s), MAXBINSIZE):
        chunk = s[i : i + MAXBINSIZE]
        pieces.append(binascii.b2a_base64(chunk))
    return b"".join(pieces)

def encodestring(s):
    """Legacy alias of encodebytes()."""
    import warnings
    warnings.warn("encodestring() is a deprecated alias since Python 3.1, "
                  "use encodebytes()",
                  DeprecationWarning, 2)
    return encodebytes(s)


def decodebytes(s):
    """Decode a bytestring of base-64 data into a bytes object."""
    _input_type_check(s)
    return binascii.a2b_base64(s)

def decodestring(s):
    """Legacy alias of decodebytes()."""
    import warnings
    warnings.warn("decodestring() is a deprecated alias since Python 3.1, "
                  "use decodebytes()",
                  DeprecationWarning, 2)
    return decodebytes(s)


# Usable as a script...
def main():
    """Small main program"""
    import sys, getopt
    try:
        opts, args = getopt.getopt(sys.argv[1:], 'deut')
    except getopt.error as msg:
        sys.stdout = sys.stderr
        print(msg)
        print("""usage: %s [-d|-e|-u|-t] [file|-]
        -d, -u: decode
        -e: encode (default)
        -t: encode and decode string 'Aladdin:open sesame'"""%sys.argv[0])
        sys.exit(2)
    func = encode
    for o, a in opts:
        if o == '-e': func = encode
        if o == '-d': func = decode
        if o == '-u': func = decode
        if o == '-t': test(); return
    if args and args[0] != '-':
        with open(args[0], 'rb') as f:
            func(f, sys.stdout.buffer)
    else:
        func(sys.stdin.buffer, sys.stdout.buffer)


def test():
    s0 = b"Aladdin:open sesame"
    print(repr(s0))
    s1 = encodebytes(s0)
    print(repr(s1))
    s2 = decodebytes(s1)
    print(repr(s2))
    assert s0 == s2


if __name__ == '__main__':
    main()
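
The legacy pair encode()/decode() operates on binary file objects and wraps
encoded output at 76 columns.  A short sketch using in-memory streams:

    import io
    import base64

    src = io.BytesIO(b'x' * 100)
    dst = io.BytesIO()
    base64.encode(src, dst)          # 100 bytes become two base-64 lines

    dst.seek(0)
    back = io.BytesIO()
    base64.decode(dst, back)
    assert back.getvalue() == b'x' * 100
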
869
Lib/bdb.py
Normal file
869
Lib/bdb.py
Normal file
|
@ -0,0 +1,869 @@
|
||||||
|
"""Debugger basics"""
|
||||||
|
|
||||||
|
import fnmatch
|
||||||
|
import sys
|
||||||
|
import os
|
||||||
|
from inspect import CO_GENERATOR, CO_COROUTINE, CO_ASYNC_GENERATOR
|
||||||
|
|
||||||
|
__all__ = ["BdbQuit", "Bdb", "Breakpoint"]
|
||||||
|
|
||||||
|
GENERATOR_AND_COROUTINE_FLAGS = CO_GENERATOR | CO_COROUTINE | CO_ASYNC_GENERATOR
|
||||||
|
|
||||||
|
|
||||||
|
class BdbQuit(Exception):
|
||||||
|
"""Exception to give up completely."""
|
||||||
|
|
||||||
|
|
||||||
|
class Bdb:
|
||||||
|
"""Generic Python debugger base class.
|
||||||
|
|
||||||
|
This class takes care of details of the trace facility;
|
||||||
|
a derived class should implement user interaction.
|
||||||
|
The standard debugger class (pdb.Pdb) is an example.
|
||||||
|
|
||||||
|
The optional skip argument must be an iterable of glob-style
|
||||||
|
module name patterns. The debugger will not step into frames
|
||||||
|
that originate in a module that matches one of these patterns.
|
||||||
|
Whether a frame is considered to originate in a certain module
|
||||||
|
is determined by the __name__ in the frame globals.
|
||||||
|
"""
|
||||||
|
|
||||||
|
def __init__(self, skip=None):
|
||||||
|
self.skip = set(skip) if skip else None
|
||||||
|
self.breaks = {}
|
||||||
|
self.fncache = {}
|
||||||
|
self.frame_returning = None
|
||||||
|
|
||||||
|
def canonic(self, filename):
|
||||||
|
"""Return canonical form of filename.
|
||||||
|
|
||||||
|
For real filenames, the canonical form is a case-normalized (on
|
||||||
|
case insenstive filesystems) absolute path. 'Filenames' with
|
||||||
|
angle brackets, such as "<stdin>", generated in interactive
|
||||||
|
mode, are returned unchanged.
|
||||||
|
"""
|
||||||
|
if filename == "<" + filename[1:-1] + ">":
|
||||||
|
return filename
|
||||||
|
canonic = self.fncache.get(filename)
|
||||||
|
if not canonic:
|
||||||
|
canonic = os.path.abspath(filename)
|
||||||
|
canonic = os.path.normcase(canonic)
|
||||||
|
self.fncache[filename] = canonic
|
||||||
|
return canonic
|
||||||
|
|
||||||
|
def reset(self):
|
||||||
|
"""Set values of attributes as ready to start debugging."""
|
||||||
|
import linecache
|
||||||
|
linecache.checkcache()
|
||||||
|
self.botframe = None
|
||||||
|
self._set_stopinfo(None, None)
|
||||||
|
|
||||||
|
def trace_dispatch(self, frame, event, arg):
|
||||||
|
"""Dispatch a trace function for debugged frames based on the event.
|
||||||
|
|
||||||
|
This function is installed as the trace function for debugged
|
||||||
|
frames. Its return value is the new trace function, which is
|
||||||
|
usually itself. The default implementation decides how to
|
||||||
|
dispatch a frame, depending on the type of event (passed in as a
|
||||||
|
string) that is about to be executed.
|
||||||
|
|
||||||
|
The event can be one of the following:
|
||||||
|
line: A new line of code is going to be executed.
|
||||||
|
call: A function is about to be called or another code block
|
||||||
|
is entered.
|
||||||
|
return: A function or other code block is about to return.
|
||||||
|
exception: An exception has occurred.
|
||||||
|
c_call: A C function is about to be called.
|
||||||
|
c_return: A C function has returned.
|
||||||
|
c_exception: A C function has raised an exception.
|
||||||
|
|
||||||
|
For the Python events, specialized functions (see the dispatch_*()
|
||||||
|
methods) are called. For the C events, no action is taken.
|
||||||
|
|
||||||
|
The arg parameter depends on the previous event.
|
||||||
|
"""
|
||||||
|
if self.quitting:
|
||||||
|
return # None
|
||||||
|
if event == 'line':
|
||||||
|
return self.dispatch_line(frame)
|
||||||
|
if event == 'call':
|
||||||
|
return self.dispatch_call(frame, arg)
|
||||||
|
if event == 'return':
|
||||||
|
return self.dispatch_return(frame, arg)
|
||||||
|
if event == 'exception':
|
||||||
|
return self.dispatch_exception(frame, arg)
|
||||||
|
if event == 'c_call':
|
||||||
|
return self.trace_dispatch
|
||||||
|
if event == 'c_exception':
|
||||||
|
return self.trace_dispatch
|
||||||
|
if event == 'c_return':
|
||||||
|
return self.trace_dispatch
|
||||||
|
print('bdb.Bdb.dispatch: unknown debugging event:', repr(event))
|
||||||
|
return self.trace_dispatch
|
||||||
|
|
||||||
|
def dispatch_line(self, frame):
|
||||||
|
"""Invoke user function and return trace function for line event.
|
||||||
|
|
||||||
|
If the debugger stops on the current line, invoke
|
||||||
|
self.user_line(). Raise BdbQuit if self.quitting is set.
|
||||||
|
Return self.trace_dispatch to continue tracing in this scope.
|
||||||
|
"""
|
||||||
|
if self.stop_here(frame) or self.break_here(frame):
|
||||||
|
self.user_line(frame)
|
||||||
|
if self.quitting: raise BdbQuit
|
||||||
|
return self.trace_dispatch
|
||||||
|
|
||||||
|
def dispatch_call(self, frame, arg):
|
||||||
|
"""Invoke user function and return trace function for call event.
|
||||||
|
|
||||||
|
If the debugger stops on this function call, invoke
|
||||||
|
self.user_call(). Raise BbdQuit if self.quitting is set.
|
||||||
|
Return self.trace_dispatch to continue tracing in this scope.
|
||||||
|
"""
|
||||||
|
# XXX 'arg' is no longer used
|
||||||
|
if self.botframe is None:
|
||||||
|
# First call of dispatch since reset()
|
||||||
|
self.botframe = frame.f_back # (CT) Note that this may also be None!
|
||||||
|
return self.trace_dispatch
|
||||||
|
if not (self.stop_here(frame) or self.break_anywhere(frame)):
|
||||||
|
# No need to trace this function
|
||||||
|
return # None
|
||||||
|
# Ignore call events in generator except when stepping.
|
||||||
|
if self.stopframe and frame.f_code.co_flags & GENERATOR_AND_COROUTINE_FLAGS:
|
||||||
|
return self.trace_dispatch
|
||||||
|
self.user_call(frame, arg)
|
||||||
|
if self.quitting: raise BdbQuit
|
||||||
|
return self.trace_dispatch
|
||||||
|
|
||||||
|
def dispatch_return(self, frame, arg):
|
||||||
|
"""Invoke user function and return trace function for return event.
|
||||||
|
|
||||||
|
If the debugger stops on this function return, invoke
|
||||||
|
self.user_return(). Raise BdbQuit if self.quitting is set.
|
||||||
|
Return self.trace_dispatch to continue tracing in this scope.
|
||||||
|
"""
|
||||||
|
if self.stop_here(frame) or frame == self.returnframe:
|
||||||
|
# Ignore return events in generator except when stepping.
|
||||||
|
if self.stopframe and frame.f_code.co_flags & GENERATOR_AND_COROUTINE_FLAGS:
|
||||||
|
return self.trace_dispatch
|
||||||
|
try:
|
||||||
|
self.frame_returning = frame
|
||||||
|
self.user_return(frame, arg)
|
||||||
|
finally:
|
||||||
|
self.frame_returning = None
|
||||||
|
if self.quitting: raise BdbQuit
|
||||||
|
# The user issued a 'next' or 'until' command.
|
||||||
|
if self.stopframe is frame and self.stoplineno != -1:
|
||||||
|
self._set_stopinfo(None, None)
|
||||||
|
return self.trace_dispatch
|
||||||
|
|
||||||
|
def dispatch_exception(self, frame, arg):
|
||||||
|
"""Invoke user function and return trace function for exception event.
|
||||||
|
|
||||||
|
If the debugger stops on this exception, invoke
|
||||||
|
self.user_exception(). Raise BdbQuit if self.quitting is set.
|
||||||
|
Return self.trace_dispatch to continue tracing in this scope.
|
||||||
|
"""
|
||||||
|
if self.stop_here(frame):
|
||||||
|
# When stepping with next/until/return in a generator frame, skip
|
||||||
|
# the internal StopIteration exception (with no traceback)
|
||||||
|
# triggered by a subiterator run with the 'yield from' statement.
|
||||||
|
if not (frame.f_code.co_flags & GENERATOR_AND_COROUTINE_FLAGS
|
||||||
|
and arg[0] is StopIteration and arg[2] is None):
|
||||||
|
self.user_exception(frame, arg)
|
||||||
|
if self.quitting: raise BdbQuit
|
||||||
|
# Stop at the StopIteration or GeneratorExit exception when the user
|
||||||
|
# has set stopframe in a generator by issuing a return command, or a
|
||||||
|
# next/until command at the last statement in the generator before the
|
||||||
|
# exception.
|
||||||
|
elif (self.stopframe and frame is not self.stopframe
|
||||||
|
and self.stopframe.f_code.co_flags & GENERATOR_AND_COROUTINE_FLAGS
|
||||||
|
and arg[0] in (StopIteration, GeneratorExit)):
|
||||||
|
self.user_exception(frame, arg)
|
||||||
|
if self.quitting: raise BdbQuit
|
||||||
|
|
||||||
|
return self.trace_dispatch
|
||||||
|
|
||||||
|
# Normally derived classes don't override the following
|
||||||
|
# methods, but they may if they want to redefine the
|
||||||
|
# definition of stopping and breakpoints.
|
||||||
|
|
||||||
|
def is_skipped_module(self, module_name):
|
||||||
|
"Return True if module_name matches any skip pattern."
|
||||||
|
for pattern in self.skip:
|
||||||
|
if fnmatch.fnmatch(module_name, pattern):
|
||||||
|
return True
|
||||||
|
return False
|
||||||
|
|
||||||
|
def stop_here(self, frame):
|
||||||
|
"Return True if frame is below the starting frame in the stack."
|
||||||
|
# (CT) stopframe may now also be None, see dispatch_call.
|
||||||
|
# (CT) the former test for None is therefore removed from here.
|
||||||
|
if self.skip and \
|
||||||
|
self.is_skipped_module(frame.f_globals.get('__name__')):
|
||||||
|
return False
|
||||||
|
if frame is self.stopframe:
|
||||||
|
if self.stoplineno == -1:
|
||||||
|
return False
|
||||||
|
return frame.f_lineno >= self.stoplineno
|
||||||
|
if not self.stopframe:
|
||||||
|
return True
|
||||||
|
return False
|
||||||
|
|
||||||
|
def break_here(self, frame):
|
||||||
|
"""Return True if there is an effective breakpoint for this line.
|
||||||
|
|
||||||
|
Check for line or function breakpoint and if in effect.
|
||||||
|
Delete temporary breakpoints if effective() says to.
|
||||||
|
"""
|
||||||
|
filename = self.canonic(frame.f_code.co_filename)
|
||||||
|
if filename not in self.breaks:
|
||||||
|
return False
|
||||||
|
lineno = frame.f_lineno
|
||||||
|
if lineno not in self.breaks[filename]:
|
||||||
|
# The line itself has no breakpoint, but maybe the line is the
|
||||||
|
# first line of a function with breakpoint set by function name.
|
||||||
|
lineno = frame.f_code.co_firstlineno
|
||||||
|
if lineno not in self.breaks[filename]:
|
||||||
|
return False
|
||||||
|
|
||||||
|
# flag says ok to delete temp. bp
|
||||||
|
(bp, flag) = effective(filename, lineno, frame)
|
||||||
|
if bp:
|
||||||
|
self.currentbp = bp.number
|
||||||
|
if (flag and bp.temporary):
|
||||||
|
self.do_clear(str(bp.number))
|
||||||
|
return True
|
||||||
|
else:
|
||||||
|
return False
|
||||||
|
|
||||||
|
def do_clear(self, arg):
|
||||||
|
"""Remove temporary breakpoint.
|
||||||
|
|
||||||
|
Must implement in derived classes or get NotImplementedError.
|
||||||
|
"""
|
||||||
|
raise NotImplementedError("subclass of bdb must implement do_clear()")
|
||||||
|
|
||||||
|
def break_anywhere(self, frame):
|
||||||
|
"""Return True if there is any breakpoint for frame's filename.
|
||||||
|
"""
|
||||||
|
return self.canonic(frame.f_code.co_filename) in self.breaks
|
||||||
|
|
||||||
|
# Derived classes should override the user_* methods
|
||||||
|
# to gain control.
|
||||||
|
|
||||||
|
def user_call(self, frame, argument_list):
|
||||||
|
"""Called if we might stop in a function."""
|
||||||
|
pass
|
||||||
|
|
||||||
|
def user_line(self, frame):
|
||||||
|
"""Called when we stop or break at a line."""
|
||||||
|
pass
|
||||||
|
|
||||||
|
def user_return(self, frame, return_value):
|
||||||
|
"""Called when a return trap is set here."""
|
||||||
|
pass
|
||||||
|
|
||||||
|
def user_exception(self, frame, exc_info):
|
||||||
|
"""Called when we stop on an exception."""
|
||||||
|
pass
|
||||||
|
|
||||||
|
def _set_stopinfo(self, stopframe, returnframe, stoplineno=0):
|
||||||
|
"""Set the attributes for stopping.
|
||||||
|
|
||||||
|
If stoplineno is greater than or equal to 0, then stop at line
|
||||||
|
greater than or equal to the stopline. If stoplineno is -1, then
|
||||||
|
don't stop at all.
|
||||||
|
"""
|
||||||
|
self.stopframe = stopframe
|
||||||
|
self.returnframe = returnframe
|
||||||
|
self.quitting = False
|
||||||
|
# stoplineno >= 0 means: stop at line >= the stoplineno
|
||||||
|
# stoplineno -1 means: don't stop at all
|
||||||
|
self.stoplineno = stoplineno
|
||||||
|
|
||||||
|
# Derived classes and clients can call the following methods
|
||||||
|
# to affect the stepping state.
|
||||||
|
|
||||||
|
def set_until(self, frame, lineno=None):
|
||||||
|
"""Stop when the line with the lineno greater than the current one is
|
||||||
|
reached or when returning from current frame."""
|
||||||
|
# the name "until" is borrowed from gdb
|
||||||
|
if lineno is None:
|
||||||
|
lineno = frame.f_lineno + 1
|
||||||
|
self._set_stopinfo(frame, frame, lineno)
|
||||||
|
|
||||||
|
def set_step(self):
|
||||||
|
"""Stop after one line of code."""
|
||||||
|
# Issue #13183: pdb skips frames after hitting a breakpoint and running
|
||||||
|
# step commands.
|
||||||
|
# Restore the trace function in the caller (that may not have been set
|
||||||
|
# for performance reasons) when returning from the current frame.
|
||||||
|
if self.frame_returning:
|
||||||
|
caller_frame = self.frame_returning.f_back
|
||||||
|
if caller_frame and not caller_frame.f_trace:
|
||||||
|
caller_frame.f_trace = self.trace_dispatch
|
||||||
|
self._set_stopinfo(None, None)
|
||||||
|
|
||||||
|
def set_next(self, frame):
|
||||||
|
"""Stop on the next line in or below the given frame."""
|
||||||
|
self._set_stopinfo(frame, None)
|
||||||
|
|
||||||
|
def set_return(self, frame):
|
||||||
|
"""Stop when returning from the given frame."""
|
||||||
|
if frame.f_code.co_flags & GENERATOR_AND_COROUTINE_FLAGS:
|
||||||
|
self._set_stopinfo(frame, None, -1)
|
||||||
|
else:
|
||||||
|
self._set_stopinfo(frame.f_back, frame)
|
||||||
|
|
||||||
|
def set_trace(self, frame=None):
|
||||||
|
"""Start debugging from frame.
|
||||||
|
|
||||||
|
If frame is not specified, debugging starts from caller's frame.
|
||||||
|
"""
|
||||||
|
if frame is None:
|
||||||
|
frame = sys._getframe().f_back
|
||||||
|
self.reset()
|
||||||
|
while frame:
|
||||||
|
frame.f_trace = self.trace_dispatch
|
||||||
|
self.botframe = frame
|
||||||
|
frame = frame.f_back
|
||||||
|
self.set_step()
|
||||||
|
sys.settrace(self.trace_dispatch)
|
||||||
|
|
||||||
|
def set_continue(self):
|
||||||
|
"""Stop only at breakpoints or when finished.
|
||||||
|
|
||||||
|
If there are no breakpoints, set the system trace function to None.
|
||||||
|
"""
|
||||||
|
# Don't stop except at breakpoints or when finished
|
||||||
|
self._set_stopinfo(self.botframe, None, -1)
|
||||||
|
if not self.breaks:
|
||||||
|
# no breakpoints; run without debugger overhead
|
||||||
|
sys.settrace(None)
|
||||||
|
frame = sys._getframe().f_back
|
||||||
|
while frame and frame is not self.botframe:
|
||||||
|
del frame.f_trace
|
||||||
|
frame = frame.f_back
|
||||||
|
|
||||||
|
def set_quit(self):
|
||||||
|
"""Set quitting attribute to True.
|
||||||
|
|
||||||
|
Raises BdbQuit exception in the next call to a dispatch_*() method.
|
||||||
|
"""
|
||||||
|
self.stopframe = self.botframe
|
||||||
|
self.returnframe = None
|
||||||
|
self.quitting = True
|
||||||
|
sys.settrace(None)
|
||||||
|
|
||||||
|
# Derived classes and clients can call the following methods
|
||||||
|
# to manipulate breakpoints. These methods return an
|
||||||
|
# error message if something went wrong, None if all is well.
|
||||||
|
# Set_break prints out the breakpoint line and file:lineno.
|
||||||
|
# Call self.get_*break*() to see the breakpoints or better
|
||||||
|
# for bp in Breakpoint.bpbynumber: if bp: bp.bpprint().
|
||||||
|
|
||||||
|
def set_break(self, filename, lineno, temporary=False, cond=None,
|
||||||
|
funcname=None):
|
||||||
|
"""Set a new breakpoint for filename:lineno.
|
||||||
|
|
||||||
|
If lineno doesn't exist for the filename, return an error message.
|
||||||
|
The filename should be in canonical form.
|
||||||
|
"""
|
||||||
|
filename = self.canonic(filename)
|
||||||
|
import linecache # Import as late as possible
|
||||||
|
line = linecache.getline(filename, lineno)
|
||||||
|
if not line:
|
||||||
|
return 'Line %s:%d does not exist' % (filename, lineno)
|
||||||
|
list = self.breaks.setdefault(filename, [])
|
||||||
|
if lineno not in list:
|
||||||
|
list.append(lineno)
|
||||||
|
bp = Breakpoint(filename, lineno, temporary, cond, funcname)
|
||||||
|
return None
|
||||||
|
|
||||||
|
def _prune_breaks(self, filename, lineno):
|
||||||
|
"""Prune breakpoints for filname:lineno.
|
||||||
|
|
||||||
|
A list of breakpoints is maintained in the Bdb instance and in
|
||||||
|
the Breakpoint class. If a breakpoint in the Bdb instance no
|
||||||
|
longer exists in the Breakpoint class, then it's removed from the
|
||||||
|
Bdb instance.
|
||||||
|
"""
|
||||||
|
if (filename, lineno) not in Breakpoint.bplist:
|
||||||
|
self.breaks[filename].remove(lineno)
|
||||||
|
if not self.breaks[filename]:
|
||||||
|
del self.breaks[filename]
|
||||||
|
|
||||||
|
def clear_break(self, filename, lineno):
|
||||||
|
"""Delete breakpoints for filename:lineno.
|
||||||
|
|
||||||
|
If no breakpoints were set, return an error message.
|
||||||
|
"""
|
||||||
|
filename = self.canonic(filename)
|
||||||
|
if filename not in self.breaks:
|
||||||
|
return 'There are no breakpoints in %s' % filename
|
||||||
|
if lineno not in self.breaks[filename]:
|
||||||
|
return 'There is no breakpoint at %s:%d' % (filename, lineno)
|
||||||
|
# If there's only one bp in the list for that file,line
|
||||||
|
# pair, then remove the breaks entry
|
||||||
|
for bp in Breakpoint.bplist[filename, lineno][:]:
|
||||||
|
bp.deleteMe()
|
||||||
|
self._prune_breaks(filename, lineno)
|
||||||
|
return None
|
||||||
|
|
||||||
|
def clear_bpbynumber(self, arg):
|
||||||
|
"""Delete a breakpoint by its index in Breakpoint.bpbynumber.
|
||||||
|
|
||||||
|
If arg is invalid, return an error message.
|
||||||
|
"""
|
||||||
|
try:
|
||||||
|
bp = self.get_bpbynumber(arg)
|
||||||
|
except ValueError as err:
|
||||||
|
return str(err)
|
||||||
|
bp.deleteMe()
|
||||||
|
self._prune_breaks(bp.file, bp.line)
|
||||||
|
return None
|
||||||
|
|
||||||
|
def clear_all_file_breaks(self, filename):
|
||||||
|
"""Delete all breakpoints in filename.
|
||||||
|
|
||||||
|
If none were set, return an error message.
|
||||||
|
"""
|
||||||
|
filename = self.canonic(filename)
|
||||||
|
if filename not in self.breaks:
|
||||||
|
return 'There are no breakpoints in %s' % filename
|
||||||
|
for line in self.breaks[filename]:
|
||||||
|
blist = Breakpoint.bplist[filename, line]
|
||||||
|
for bp in blist:
|
||||||
|
bp.deleteMe()
|
||||||
|
del self.breaks[filename]
|
||||||
|
return None
|
||||||
|
|
||||||
|
def clear_all_breaks(self):
|
||||||
|
"""Delete all existing breakpoints.
|
||||||
|
|
||||||
|
If none were set, return an error message.
|
||||||
|
"""
|
||||||
|
if not self.breaks:
|
||||||
|
return 'There are no breakpoints'
|
||||||
|
for bp in Breakpoint.bpbynumber:
|
||||||
|
if bp:
|
||||||
|
bp.deleteMe()
|
||||||
|
self.breaks = {}
|
||||||
|
return None
|
||||||
|
|
||||||
|
def get_bpbynumber(self, arg):
|
||||||
|
"""Return a breakpoint by its index in Breakpoint.bybpnumber.
|
||||||
|
|
||||||
|
For invalid arg values or if the breakpoint doesn't exist,
|
||||||
|
raise a ValueError.
|
||||||
|
"""
|
||||||
|
if not arg:
|
||||||
|
raise ValueError('Breakpoint number expected')
|
||||||
|
try:
|
||||||
|
number = int(arg)
|
||||||
|
except ValueError:
|
||||||
|
raise ValueError('Non-numeric breakpoint number %s' % arg) from None
|
||||||
|
try:
|
||||||
|
bp = Breakpoint.bpbynumber[number]
|
||||||
|
except IndexError:
|
||||||
|
raise ValueError('Breakpoint number %d out of range' % number) from None
|
||||||
|
if bp is None:
|
||||||
|
raise ValueError('Breakpoint %d already deleted' % number)
|
||||||
|
return bp
|
||||||
|
|
||||||
|
def get_break(self, filename, lineno):
|
||||||
|
"""Return True if there is a breakpoint for filename:lineno."""
|
||||||
|
filename = self.canonic(filename)
|
||||||
|
return filename in self.breaks and \
|
||||||
|
lineno in self.breaks[filename]
|
||||||
|
|
||||||
|
def get_breaks(self, filename, lineno):
|
||||||
|
"""Return all breakpoints for filename:lineno.
|
||||||
|
|
||||||
|
If no breakpoints are set, return an empty list.
|
||||||
|
"""
|
||||||
|
filename = self.canonic(filename)
|
||||||
|
return filename in self.breaks and \
|
||||||
|
lineno in self.breaks[filename] and \
|
||||||
|
Breakpoint.bplist[filename, lineno] or []
|
||||||
|
|
||||||
|
def get_file_breaks(self, filename):
|
||||||
|
"""Return all lines with breakpoints for filename.
|
||||||
|
|
||||||
|
If no breakpoints are set, return an empty list.
|
||||||
|
"""
|
||||||
|
filename = self.canonic(filename)
|
||||||
|
if filename in self.breaks:
|
||||||
|
return self.breaks[filename]
|
||||||
|
else:
|
||||||
|
return []
|
||||||
|
|
||||||
|
def get_all_breaks(self):
|
||||||
|
"""Return all breakpoints that are set."""
|
||||||
|
return self.breaks
|
||||||
|
|
||||||
|
# Derived classes and clients can call the following method
|
||||||
|
# to get a data structure representing a stack trace.
|
||||||
|
|
||||||
|
def get_stack(self, f, t):
|
||||||
|
"""Return a list of (frame, lineno) in a stack trace and a size.
|
||||||
|
|
||||||
|
List starts with original calling frame, if there is one.
|
||||||
|
Size may be number of frames above or below f.
|
||||||
|
"""
|
||||||
|
stack = []
|
||||||
|
if t and t.tb_frame is f:
|
||||||
|
t = t.tb_next
|
||||||
|
while f is not None:
|
||||||
|
stack.append((f, f.f_lineno))
|
||||||
|
if f is self.botframe:
|
||||||
|
break
|
||||||
|
f = f.f_back
|
||||||
|
stack.reverse()
|
||||||
|
i = max(0, len(stack) - 1)
|
||||||
|
while t is not None:
|
||||||
|
stack.append((t.tb_frame, t.tb_lineno))
|
||||||
|
t = t.tb_next
|
||||||
|
if f is None:
|
||||||
|
i = max(0, len(stack) - 1)
|
||||||
|
return stack, i
|
||||||
|
|
||||||
|
def format_stack_entry(self, frame_lineno, lprefix=': '):
|
||||||
|
"""Return a string with information about a stack entry.
|
||||||
|
|
||||||
|
The stack entry frame_lineno is a (frame, lineno) tuple. The
|
||||||
|
return string contains the canonical filename, the function name
|
||||||
|
or '<lambda>', the input arguments, the return value, and the
|
||||||
|
line of code (if it exists).
|
||||||
|
|
||||||
|
"""
|
||||||
|
import linecache, reprlib
|
||||||
|
frame, lineno = frame_lineno
|
||||||
|
filename = self.canonic(frame.f_code.co_filename)
|
||||||
|
s = '%s(%r)' % (filename, lineno)
|
||||||
|
if frame.f_code.co_name:
|
||||||
|
s += frame.f_code.co_name
|
||||||
|
else:
|
||||||
|
s += "<lambda>"
|
||||||
|
if '__args__' in frame.f_locals:
|
||||||
|
args = frame.f_locals['__args__']
|
||||||
|
else:
|
||||||
|
args = None
|
||||||
|
if args:
|
||||||
|
s += reprlib.repr(args)
|
||||||
|
else:
|
||||||
|
s += '()'
|
||||||
|
if '__return__' in frame.f_locals:
|
||||||
|
rv = frame.f_locals['__return__']
|
||||||
|
s += '->'
|
||||||
|
s += reprlib.repr(rv)
|
||||||
|
line = linecache.getline(filename, lineno, frame.f_globals)
|
||||||
|
if line:
|
||||||
|
s += lprefix + line.strip()
|
||||||
|
return s
|
||||||
|
|
||||||
|
# The following methods can be called by clients to use
|
||||||
|
# a debugger to debug a statement or an expression.
|
||||||
|
# Both can be given as a string, or a code object.
|
||||||
|
|
||||||
|
def run(self, cmd, globals=None, locals=None):
|
||||||
|
"""Debug a statement executed via the exec() function.
|
||||||
|
|
||||||
|
globals defaults to __main__.dict; locals defaults to globals.
|
||||||
|
"""
|
||||||
|
if globals is None:
|
||||||
|
import __main__
|
||||||
|
globals = __main__.__dict__
|
||||||
|
if locals is None:
|
||||||
|
locals = globals
|
||||||
|
self.reset()
|
||||||
|
if isinstance(cmd, str):
|
||||||
|
cmd = compile(cmd, "<string>", "exec")
|
||||||
|
sys.settrace(self.trace_dispatch)
|
||||||
|
try:
|
||||||
|
exec(cmd, globals, locals)
|
||||||
|
except BdbQuit:
|
||||||
|
pass
|
||||||
|
finally:
|
||||||
|
self.quitting = True
|
||||||
|
sys.settrace(None)
|
||||||
|
|
||||||
|
def runeval(self, expr, globals=None, locals=None):
|
||||||
|
"""Debug an expression executed via the eval() function.
|
||||||
|
|
||||||
|
globals defaults to __main__.dict; locals defaults to globals.
|
||||||
|
"""
|
||||||
|
if globals is None:
|
||||||
|
import __main__
|
||||||
|
globals = __main__.__dict__
|
||||||
|
if locals is None:
|
||||||
|
locals = globals
|
||||||
|
self.reset()
|
||||||
|
sys.settrace(self.trace_dispatch)
|
||||||
|
try:
|
||||||
|
return eval(expr, globals, locals)
|
||||||
|
except BdbQuit:
|
||||||
|
pass
|
||||||
|
finally:
|
||||||
|
self.quitting = True
|
||||||
|
sys.settrace(None)
|
||||||
|
|
||||||
|
def runctx(self, cmd, globals, locals):
|
||||||
|
"""For backwards-compatibility. Defers to run()."""
|
||||||
|
# B/W compatibility
|
||||||
|
self.run(cmd, globals, locals)
|
||||||
|
|
||||||
|
# This method is more useful to debug a single function call.
|
||||||
|
|
||||||
|
def runcall(self, func, *args, **kwds):
|
||||||
|
"""Debug a single function call.
|
||||||
|
|
||||||
|
Return the result of the function call.
|
||||||
|
"""
|
||||||
|
self.reset()
|
||||||
|
sys.settrace(self.trace_dispatch)
|
||||||
|
res = None
|
||||||
|
try:
|
||||||
|
res = func(*args, **kwds)
|
||||||
|
except BdbQuit:
|
||||||
|
pass
|
||||||
|
finally:
|
||||||
|
self.quitting = True
|
||||||
|
sys.settrace(None)
|
||||||
|
return res
|
||||||
|
|
||||||
|
|
||||||
|
def set_trace():
|
||||||
|
"""Start debugging with a Bdb instance from the caller's frame."""
|
||||||
|
Bdb().set_trace()
|
||||||
|
|
||||||
|
|
||||||
|
class Breakpoint:
|
||||||
|
"""Breakpoint class.
|
||||||
|
|
||||||
|
Implements temporary breakpoints, ignore counts, disabling and
|
||||||
|
(re)-enabling, and conditionals.
|
||||||
|
|
||||||
|
Breakpoints are indexed by number through bpbynumber and by
|
||||||
|
the (file, line) tuple using bplist. The former points to a
|
||||||
|
single instance of class Breakpoint. The latter points to a
|
||||||
|
list of such instances since there may be more than one
|
||||||
|
breakpoint per line.
|
||||||
|
|
||||||
|
When creating a breakpoint, its associated filename should be
|
||||||
|
in canonical form. If funcname is defined, a breakpoint hit will be
|
||||||
|
counted when the first line of that function is executed. A
|
||||||
|
conditional breakpoint always counts a hit.
|
||||||
|
"""
|
||||||
|
|
||||||
|
# XXX Keeping state in the class is a mistake -- this means
|
||||||
|
# you cannot have more than one active Bdb instance.
|
||||||
|
|
||||||
|
next = 1 # Next bp to be assigned
|
||||||
|
bplist = {} # indexed by (file, lineno) tuple
|
||||||
|
bpbynumber = [None] # Each entry is None or an instance of Bpt
|
||||||
|
# index 0 is unused, except for marking an
|
||||||
|
# effective break .... see effective()
|
||||||
|
|
||||||
|
def __init__(self, file, line, temporary=False, cond=None, funcname=None):
|
||||||
|
self.funcname = funcname
|
||||||
|
# Needed if funcname is not None.
|
||||||
|
self.func_first_executable_line = None
|
||||||
|
self.file = file # This better be in canonical form!
|
||||||
|
self.line = line
|
||||||
|
self.temporary = temporary
|
||||||
|
self.cond = cond
|
||||||
|
self.enabled = True
|
||||||
|
self.ignore = 0
|
||||||
|
self.hits = 0
|
||||||
|
self.number = Breakpoint.next
|
||||||
|
Breakpoint.next += 1
|
||||||
|
# Build the two lists
|
||||||
|
self.bpbynumber.append(self)
|
||||||
|
if (file, line) in self.bplist:
|
||||||
|
self.bplist[file, line].append(self)
|
||||||
|
else:
|
||||||
|
self.bplist[file, line] = [self]
|
||||||
|
|
||||||
|
def deleteMe(self):
|
||||||
|
"""Delete the breakpoint from the list associated to a file:line.
|
||||||
|
|
||||||
|
If it is the last breakpoint in that position, it also deletes
|
||||||
|
the entry for the file:line.
|
||||||
|
"""
|
||||||
|
|
||||||
|
index = (self.file, self.line)
|
||||||
|
self.bpbynumber[self.number] = None # No longer in list
|
||||||
|
self.bplist[index].remove(self)
|
||||||
|
if not self.bplist[index]:
|
||||||
|
# No more bp for this f:l combo
|
||||||
|
del self.bplist[index]
|
||||||
|
|
||||||
|
def enable(self):
|
||||||
|
"""Mark the breakpoint as enabled."""
|
||||||
|
self.enabled = True
|
||||||
|
|
||||||
|
def disable(self):
|
||||||
|
"""Mark the breakpoint as disabled."""
|
||||||
|
self.enabled = False
|
||||||
|
|
||||||
|
def bpprint(self, out=None):
|
||||||
|
"""Print the output of bpformat().
|
||||||
|
|
||||||
|
The optional out argument directs where the output is sent
|
||||||
|
and defaults to standard output.
|
||||||
|
"""
|
||||||
|
if out is None:
|
||||||
|
out = sys.stdout
|
||||||
|
print(self.bpformat(), file=out)
|
||||||
|
|
||||||
|
def bpformat(self):
|
||||||
|
"""Return a string with information about the breakpoint.
|
||||||
|
|
||||||
|
The information includes the breakpoint number, temporary
|
||||||
|
status, file:line position, break condition, number of times to
|
||||||
|
ignore, and number of times hit.
|
||||||
|
|
||||||
|
"""
|
||||||
|
if self.temporary:
|
||||||
|
disp = 'del '
|
||||||
|
else:
|
||||||
|
disp = 'keep '
|
||||||
|
if self.enabled:
|
||||||
|
disp = disp + 'yes '
|
||||||
|
else:
|
||||||
|
disp = disp + 'no '
|
||||||
|
ret = '%-4dbreakpoint %s at %s:%d' % (self.number, disp,
|
||||||
|
self.file, self.line)
|
||||||
|
if self.cond:
|
||||||
|
ret += '\n\tstop only if %s' % (self.cond,)
|
||||||
|
if self.ignore:
|
||||||
|
ret += '\n\tignore next %d hits' % (self.ignore,)
|
||||||
|
if self.hits:
|
||||||
|
if self.hits > 1:
|
||||||
|
ss = 's'
|
||||||
|
else:
|
||||||
|
ss = ''
|
||||||
|
ret += '\n\tbreakpoint already hit %d time%s' % (self.hits, ss)
|
||||||
|
return ret
|
||||||
|
|
||||||
|
def __str__(self):
|
||||||
|
"Return a condensed description of the breakpoint."
|
||||||
|
return 'breakpoint %s at %s:%s' % (self.number, self.file, self.line)
|
||||||
|
|
||||||
|
# -----------end of Breakpoint class----------
|
||||||
|
|
||||||
|
|
||||||
|
def checkfuncname(b, frame):
|
||||||
|
"""Return True if break should happen here.
|
||||||
|
|
||||||
|
Whether a break should happen depends on the way that b (the breakpoint)
|
||||||
|
was set. If it was set via line number, check if b.line is the same as
|
||||||
|
the one in the frame. If it was set via function name, check if this is
|
||||||
|
the right function and if it is on the first executable line.
|
||||||
|
"""
|
||||||
|
if not b.funcname:
|
||||||
|
# Breakpoint was set via line number.
|
||||||
|
if b.line != frame.f_lineno:
|
||||||
|
# Breakpoint was set at a line with a def statement and the function
|
||||||
|
# defined is called: don't break.
|
||||||
|
return False
|
||||||
|
return True
|
||||||
|
|
||||||
|
# Breakpoint set via function name.
|
||||||
|
if frame.f_code.co_name != b.funcname:
|
||||||
|
# It's not a function call, but rather execution of def statement.
|
||||||
|
return False
|
||||||
|
|
||||||
|
# We are in the right frame.
|
||||||
|
if not b.func_first_executable_line:
|
||||||
|
# The function is entered for the 1st time.
|
||||||
|
b.func_first_executable_line = frame.f_lineno
|
||||||
|
|
||||||
|
if b.func_first_executable_line != frame.f_lineno:
|
||||||
|
# But we are not at the first line number: don't break.
|
||||||
|
return False
|
||||||
|
return True
|
||||||
|
|
||||||
|
|
||||||
|
# Determines if there is an effective (active) breakpoint at this
|
||||||
|
# line of code. Returns breakpoint number or 0 if none
|
||||||
|
def effective(file, line, frame):
|
||||||
|
"""Determine which breakpoint for this file:line is to be acted upon.
|
||||||
|
|
||||||
|
Called only if we know there is a breakpoint at this location. Return
|
||||||
|
the breakpoint that was triggered and a boolean that indicates if it is
|
||||||
|
ok to delete a temporary breakpoint. Return (None, None) if there is no
|
||||||
|
matching breakpoint.
|
||||||
|
"""
|
||||||
|
possibles = Breakpoint.bplist[file, line]
|
||||||
|
for b in possibles:
|
||||||
|
if not b.enabled:
|
||||||
|
continue
|
||||||
|
if not checkfuncname(b, frame):
|
||||||
|
continue
|
||||||
|
# Count every hit when bp is enabled
|
||||||
|
b.hits += 1
|
||||||
|
if not b.cond:
|
||||||
|
# If unconditional, and ignoring go on to next, else break
|
||||||
|
if b.ignore > 0:
|
||||||
|
b.ignore -= 1
|
||||||
|
continue
|
||||||
|
else:
|
||||||
|
# breakpoint and marker that it's ok to delete if temporary
|
||||||
|
return (b, True)
|
||||||
|
else:
|
||||||
|
# Conditional bp.
|
||||||
|
# Ignore count applies only to those bpt hits where the
|
||||||
|
# condition evaluates to true.
|
||||||
|
try:
|
||||||
|
val = eval(b.cond, frame.f_globals, frame.f_locals)
|
||||||
|
if val:
|
||||||
|
if b.ignore > 0:
|
||||||
|
b.ignore -= 1
|
||||||
|
# continue
|
||||||
|
else:
|
||||||
|
return (b, True)
|
||||||
|
# else:
|
||||||
|
# continue
|
||||||
|
except:
|
||||||
|
# if eval fails, most conservative thing is to stop on
|
||||||
|
# breakpoint regardless of ignore count. Don't delete
|
||||||
|
# temporary, as another hint to user.
|
||||||
|
return (b, False)
|
||||||
|
return (None, None)
|
||||||
|
|
||||||
|
|
||||||
|
# -------------------- testing --------------------
|
||||||
|
|
||||||
|
class Tdb(Bdb):
|
||||||
|
def user_call(self, frame, args):
|
||||||
|
name = frame.f_code.co_name
|
||||||
|
if not name: name = '???'
|
||||||
|
print('+++ call', name, args)
|
||||||
|
def user_line(self, frame):
|
||||||
|
import linecache
|
||||||
|
name = frame.f_code.co_name
|
||||||
|
if not name: name = '???'
|
||||||
|
fn = self.canonic(frame.f_code.co_filename)
|
||||||
|
line = linecache.getline(fn, frame.f_lineno, frame.f_globals)
|
||||||
|
print('+++', fn, frame.f_lineno, name, ':', line.strip())
|
||||||
|
def user_return(self, frame, retval):
|
||||||
|
print('+++ return', retval)
|
||||||
|
def user_exception(self, frame, exc_stuff):
|
||||||
|
print('+++ exception', exc_stuff)
|
||||||
|
self.set_continue()
|
||||||
|
|
||||||
|
def foo(n):
|
||||||
|
print('foo(', n, ')')
|
||||||
|
x = bar(n*10)
|
||||||
|
print('bar returned', x)
|
||||||
|
|
||||||
|
def bar(a):
|
||||||
|
print('bar(', a, ')')
|
||||||
|
return a/2
|
||||||
|
|
||||||
|
def test():
|
||||||
|
t = Tdb()
|
||||||
|
t.run('import bdb; bdb.foo(10)')
|
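
The Tdb class above already shows the intended extension pattern. As a further sketch (illustrative only, not part of the diff), a minimal Bdb subclass that prints every line of a traced call; the class and function names are hypothetical:

    import bdb

    class LineTracer(bdb.Bdb):
        # user_line() is the hook invoked each time the debugger stops at a line.
        def user_line(self, frame):
            print('line %s:%d' % (frame.f_code.co_filename, frame.f_lineno))

    def demo():
        total = 0
        for i in range(3):
            total += i
        return total

    LineTracer().runcall(demo)   # prints each line of demo() as it executes
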
479  Lib/binhex.py  Normal file
@@ -0,0 +1,479 @@
"""Macintosh binhex compression/decompression.
|
||||||
|
|
||||||
|
easy interface:
|
||||||
|
binhex(inputfilename, outputfilename)
|
||||||
|
hexbin(inputfilename, outputfilename)
|
||||||
|
"""
|
||||||
|
|
||||||
|
#
|
||||||
|
# Jack Jansen, CWI, August 1995.
|
||||||
|
#
|
||||||
|
# The module is supposed to be as compatible as possible. Especially the
|
||||||
|
# easy interface should work "as expected" on any platform.
|
||||||
|
# XXXX Note: currently, textfiles appear in mac-form on all platforms.
|
||||||
|
# We seem to lack a simple character-translate in python.
|
||||||
|
# (we should probably use ISO-Latin-1 on all but the mac platform).
|
||||||
|
# XXXX The simple routines are too simple: they expect to hold the complete
|
||||||
|
# files in-core. Should be fixed.
|
||||||
|
# XXXX It would be nice to handle AppleDouble format on unix
|
||||||
|
# (for servers serving macs).
|
||||||
|
# XXXX I don't understand what happens when you get 0x90 times the same byte on
|
||||||
|
# input. The resulting code (xx 90 90) would appear to be interpreted as an
|
||||||
|
# escaped *value* of 0x90. All coders I've seen appear to ignore this nicety...
|
||||||
|
#
|
||||||
|
import io
|
||||||
|
import os
|
||||||
|
import struct
|
||||||
|
import binascii
|
||||||
|
|
||||||
|
__all__ = ["binhex","hexbin","Error"]
|
||||||
|
|
||||||
|
class Error(Exception):
|
||||||
|
pass
|
||||||
|
|
||||||
|
# States (what have we written)
|
||||||
|
_DID_HEADER = 0
|
||||||
|
_DID_DATA = 1
|
||||||
|
|
||||||
|
# Various constants
|
||||||
|
REASONABLY_LARGE = 32768 # Minimal amount we pass the rle-coder
|
||||||
|
LINELEN = 64
|
||||||
|
RUNCHAR = b"\x90"
|
||||||
|
|
||||||
|
#
|
||||||
|
# This code is no longer byte-order dependent
|
||||||
|
|
||||||
|
|
||||||
|
class FInfo:
|
||||||
|
def __init__(self):
|
||||||
|
self.Type = '????'
|
||||||
|
self.Creator = '????'
|
||||||
|
self.Flags = 0
|
||||||
|
|
||||||
|
def getfileinfo(name):
|
||||||
|
finfo = FInfo()
|
||||||
|
with io.open(name, 'rb') as fp:
|
||||||
|
# Quick check for textfile
|
||||||
|
data = fp.read(512)
|
||||||
|
if 0 not in data:
|
||||||
|
finfo.Type = 'TEXT'
|
||||||
|
fp.seek(0, 2)
|
||||||
|
dsize = fp.tell()
|
||||||
|
dir, file = os.path.split(name)
|
||||||
|
file = file.replace(':', '-', 1)
|
||||||
|
return file, finfo, dsize, 0
|
||||||
|
|
||||||
|
class openrsrc:
|
||||||
|
def __init__(self, *args):
|
||||||
|
pass
|
||||||
|
|
||||||
|
def read(self, *args):
|
||||||
|
return b''
|
||||||
|
|
||||||
|
def write(self, *args):
|
||||||
|
pass
|
||||||
|
|
||||||
|
def close(self):
|
||||||
|
pass
|
||||||
|
|
||||||
|
class _Hqxcoderengine:
|
||||||
|
"""Write data to the coder in 3-byte chunks"""
|
||||||
|
|
||||||
|
def __init__(self, ofp):
|
||||||
|
self.ofp = ofp
|
||||||
|
self.data = b''
|
||||||
|
self.hqxdata = b''
|
||||||
|
self.linelen = LINELEN - 1
|
||||||
|
|
||||||
|
def write(self, data):
|
||||||
|
self.data = self.data + data
|
||||||
|
datalen = len(self.data)
|
||||||
|
todo = (datalen // 3) * 3
|
||||||
|
data = self.data[:todo]
|
||||||
|
self.data = self.data[todo:]
|
||||||
|
if not data:
|
||||||
|
return
|
||||||
|
self.hqxdata = self.hqxdata + binascii.b2a_hqx(data)
|
||||||
|
self._flush(0)
|
||||||
|
|
||||||
|
def _flush(self, force):
|
||||||
|
first = 0
|
||||||
|
while first <= len(self.hqxdata) - self.linelen:
|
||||||
|
last = first + self.linelen
|
||||||
|
self.ofp.write(self.hqxdata[first:last] + b'\n')
|
||||||
|
self.linelen = LINELEN
|
||||||
|
first = last
|
||||||
|
self.hqxdata = self.hqxdata[first:]
|
||||||
|
if force:
|
||||||
|
self.ofp.write(self.hqxdata + b':\n')
|
||||||
|
|
||||||
|
def close(self):
|
||||||
|
if self.data:
|
||||||
|
self.hqxdata = self.hqxdata + binascii.b2a_hqx(self.data)
|
||||||
|
self._flush(1)
|
||||||
|
self.ofp.close()
|
||||||
|
del self.ofp
|
||||||
|
|
||||||
|
class _Rlecoderengine:
|
||||||
|
"""Write data to the RLE-coder in suitably large chunks"""
|
||||||
|
|
||||||
|
def __init__(self, ofp):
|
||||||
|
self.ofp = ofp
|
||||||
|
self.data = b''
|
||||||
|
|
||||||
|
def write(self, data):
|
||||||
|
self.data = self.data + data
|
||||||
|
if len(self.data) < REASONABLY_LARGE:
|
||||||
|
return
|
||||||
|
rledata = binascii.rlecode_hqx(self.data)
|
||||||
|
self.ofp.write(rledata)
|
||||||
|
self.data = b''
|
||||||
|
|
||||||
|
def close(self):
|
||||||
|
if self.data:
|
||||||
|
rledata = binascii.rlecode_hqx(self.data)
|
||||||
|
self.ofp.write(rledata)
|
||||||
|
self.ofp.close()
|
||||||
|
del self.ofp
|
||||||
|
|
||||||
|
class BinHex:
|
||||||
|
def __init__(self, name_finfo_dlen_rlen, ofp):
|
||||||
|
name, finfo, dlen, rlen = name_finfo_dlen_rlen
|
||||||
|
close_on_error = False
|
||||||
|
if isinstance(ofp, str):
|
||||||
|
ofname = ofp
|
||||||
|
ofp = io.open(ofname, 'wb')
|
||||||
|
close_on_error = True
|
||||||
|
try:
|
||||||
|
ofp.write(b'(This file must be converted with BinHex 4.0)\r\r:')
|
||||||
|
hqxer = _Hqxcoderengine(ofp)
|
||||||
|
self.ofp = _Rlecoderengine(hqxer)
|
||||||
|
self.crc = 0
|
||||||
|
if finfo is None:
|
||||||
|
finfo = FInfo()
|
||||||
|
self.dlen = dlen
|
||||||
|
self.rlen = rlen
|
||||||
|
self._writeinfo(name, finfo)
|
||||||
|
self.state = _DID_HEADER
|
||||||
|
except:
|
||||||
|
if close_on_error:
|
||||||
|
ofp.close()
|
||||||
|
raise
|
||||||
|
|
||||||
|
def _writeinfo(self, name, finfo):
|
||||||
|
nl = len(name)
|
||||||
|
if nl > 63:
|
||||||
|
raise Error('Filename too long')
|
||||||
|
d = bytes([nl]) + name.encode("latin-1") + b'\0'
|
||||||
|
tp, cr = finfo.Type, finfo.Creator
|
||||||
|
if isinstance(tp, str):
|
||||||
|
tp = tp.encode("latin-1")
|
||||||
|
if isinstance(cr, str):
|
||||||
|
cr = cr.encode("latin-1")
|
||||||
|
d2 = tp + cr
|
||||||
|
|
||||||
|
# Force all structs to be packed with big-endian
|
||||||
|
d3 = struct.pack('>h', finfo.Flags)
|
||||||
|
d4 = struct.pack('>ii', self.dlen, self.rlen)
|
||||||
|
info = d + d2 + d3 + d4
|
||||||
|
self._write(info)
|
||||||
|
self._writecrc()
|
||||||
|
|
||||||
|
def _write(self, data):
|
||||||
|
self.crc = binascii.crc_hqx(data, self.crc)
|
||||||
|
self.ofp.write(data)
|
||||||
|
|
||||||
|
def _writecrc(self):
|
||||||
|
# XXXX Should this be here??
|
||||||
|
# self.crc = binascii.crc_hqx('\0\0', self.crc)
|
||||||
|
if self.crc < 0:
|
||||||
|
fmt = '>h'
|
||||||
|
else:
|
||||||
|
fmt = '>H'
|
||||||
|
self.ofp.write(struct.pack(fmt, self.crc))
|
||||||
|
self.crc = 0
|
||||||
|
|
||||||
|
def write(self, data):
|
||||||
|
if self.state != _DID_HEADER:
|
||||||
|
raise Error('Writing data at the wrong time')
|
||||||
|
self.dlen = self.dlen - len(data)
|
||||||
|
self._write(data)
|
||||||
|
|
||||||
|
def close_data(self):
|
||||||
|
if self.dlen != 0:
|
||||||
|
raise Error('Incorrect data size, diff=%r' % (self.rlen,))
|
||||||
|
self._writecrc()
|
||||||
|
self.state = _DID_DATA
|
||||||
|
|
||||||
|
def write_rsrc(self, data):
|
||||||
|
if self.state < _DID_DATA:
|
||||||
|
self.close_data()
|
||||||
|
if self.state != _DID_DATA:
|
||||||
|
raise Error('Writing resource data at the wrong time')
|
||||||
|
self.rlen = self.rlen - len(data)
|
||||||
|
self._write(data)
|
||||||
|
|
||||||
|
def close(self):
|
||||||
|
if self.state is None:
|
||||||
|
return
|
||||||
|
try:
|
||||||
|
if self.state < _DID_DATA:
|
||||||
|
self.close_data()
|
||||||
|
if self.state != _DID_DATA:
|
||||||
|
raise Error('Close at the wrong time')
|
||||||
|
if self.rlen != 0:
|
||||||
|
raise Error("Incorrect resource-datasize, diff=%r" % (self.rlen,))
|
||||||
|
self._writecrc()
|
||||||
|
finally:
|
||||||
|
self.state = None
|
||||||
|
ofp = self.ofp
|
||||||
|
del self.ofp
|
||||||
|
ofp.close()
|
||||||
|
|
||||||
|
def binhex(inp, out):
|
||||||
|
"""binhex(infilename, outfilename): create binhex-encoded copy of a file"""
|
||||||
|
finfo = getfileinfo(inp)
|
||||||
|
ofp = BinHex(finfo, out)
|
||||||
|
|
||||||
|
with io.open(inp, 'rb') as ifp:
|
||||||
|
# XXXX Do textfile translation on non-mac systems
|
||||||
|
while True:
|
||||||
|
d = ifp.read(128000)
|
||||||
|
if not d: break
|
||||||
|
ofp.write(d)
|
||||||
|
ofp.close_data()
|
||||||
|
|
||||||
|
ifp = openrsrc(inp, 'rb')
|
||||||
|
while True:
|
||||||
|
d = ifp.read(128000)
|
||||||
|
if not d: break
|
||||||
|
ofp.write_rsrc(d)
|
||||||
|
ofp.close()
|
||||||
|
ifp.close()
|
||||||
|
|
||||||
|
class _Hqxdecoderengine:
|
||||||
|
"""Read data via the decoder in 4-byte chunks"""
|
||||||
|
|
||||||
|
def __init__(self, ifp):
|
||||||
|
self.ifp = ifp
|
||||||
|
self.eof = 0
|
||||||
|
|
||||||
|
def read(self, totalwtd):
|
||||||
|
"""Read at least wtd bytes (or until EOF)"""
|
||||||
|
decdata = b''
|
||||||
|
wtd = totalwtd
|
||||||
|
#
|
||||||
|
# The loop here is convoluted, since we don't really now how
|
||||||
|
# much to decode: there may be newlines in the incoming data.
|
||||||
|
while wtd > 0:
|
||||||
|
if self.eof: return decdata
|
||||||
|
wtd = ((wtd + 2) // 3) * 4
|
||||||
|
data = self.ifp.read(wtd)
|
||||||
|
#
|
||||||
|
# Next problem: there may not be a complete number of
|
||||||
|
# bytes in what we pass to a2b. Solve by yet another
|
||||||
|
# loop.
|
||||||
|
#
|
||||||
|
while True:
|
||||||
|
try:
|
||||||
|
decdatacur, self.eof = binascii.a2b_hqx(data)
|
||||||
|
break
|
||||||
|
except binascii.Incomplete:
|
||||||
|
pass
|
||||||
|
newdata = self.ifp.read(1)
|
||||||
|
if not newdata:
|
||||||
|
raise Error('Premature EOF on binhex file')
|
||||||
|
data = data + newdata
|
||||||
|
decdata = decdata + decdatacur
|
||||||
|
wtd = totalwtd - len(decdata)
|
||||||
|
if not decdata and not self.eof:
|
||||||
|
raise Error('Premature EOF on binhex file')
|
||||||
|
return decdata
|
||||||
|
|
||||||
|
def close(self):
|
||||||
|
self.ifp.close()
|
||||||
|
|
||||||
|
class _Rledecoderengine:
|
||||||
|
"""Read data via the RLE-coder"""
|
||||||
|
|
||||||
|
def __init__(self, ifp):
|
||||||
|
self.ifp = ifp
|
||||||
|
self.pre_buffer = b''
|
||||||
|
self.post_buffer = b''
|
||||||
|
self.eof = 0
|
||||||
|
|
||||||
|
def read(self, wtd):
|
||||||
|
if wtd > len(self.post_buffer):
|
||||||
|
self._fill(wtd - len(self.post_buffer))
|
||||||
|
rv = self.post_buffer[:wtd]
|
||||||
|
self.post_buffer = self.post_buffer[wtd:]
|
||||||
|
return rv
|
||||||
|
|
||||||
|
def _fill(self, wtd):
|
||||||
|
self.pre_buffer = self.pre_buffer + self.ifp.read(wtd + 4)
|
||||||
|
if self.ifp.eof:
|
||||||
|
self.post_buffer = self.post_buffer + \
|
||||||
|
binascii.rledecode_hqx(self.pre_buffer)
|
||||||
|
self.pre_buffer = b''
|
||||||
|
return
|
||||||
|
|
||||||
|
#
|
||||||
|
# Obfuscated code ahead. We have to take care that we don't
|
||||||
|
# end up with an orphaned RUNCHAR later on. So, we keep a couple
|
||||||
|
# of bytes in the buffer, depending on what the end of
|
||||||
|
# the buffer looks like:
|
||||||
|
# '\220\0\220' - Keep 3 bytes: repeated \220 (escaped as \220\0)
|
||||||
|
# '?\220' - Keep 2 bytes: repeated something-else
|
||||||
|
# '\220\0' - Escaped \220: Keep 2 bytes.
|
||||||
|
# '?\220?' - Complete repeat sequence: decode all
|
||||||
|
# otherwise: keep 1 byte.
|
||||||
|
#
|
||||||
|
mark = len(self.pre_buffer)
|
||||||
|
if self.pre_buffer[-3:] == RUNCHAR + b'\0' + RUNCHAR:
|
||||||
|
mark = mark - 3
|
||||||
|
elif self.pre_buffer[-1:] == RUNCHAR:
|
||||||
|
mark = mark - 2
|
||||||
|
elif self.pre_buffer[-2:] == RUNCHAR + b'\0':
|
||||||
|
mark = mark - 2
|
||||||
|
elif self.pre_buffer[-2:-1] == RUNCHAR:
|
||||||
|
pass # Decode all
|
||||||
|
else:
|
||||||
|
mark = mark - 1
|
||||||
|
|
||||||
|
self.post_buffer = self.post_buffer + \
|
||||||
|
binascii.rledecode_hqx(self.pre_buffer[:mark])
|
||||||
|
self.pre_buffer = self.pre_buffer[mark:]
|
||||||
|
|
||||||
|
def close(self):
|
||||||
|
self.ifp.close()
|
||||||
|
|
||||||
|
class HexBin:
|
||||||
|
def __init__(self, ifp):
|
||||||
|
if isinstance(ifp, str):
|
||||||
|
ifp = io.open(ifp, 'rb')
|
||||||
|
#
|
||||||
|
# Find initial colon.
|
||||||
|
#
|
||||||
|
while True:
|
||||||
|
ch = ifp.read(1)
|
||||||
|
if not ch:
|
||||||
|
raise Error("No binhex data found")
|
||||||
|
# Cater for \r\n terminated lines (which show up as \n\r, hence
|
||||||
|
# all lines start with \r)
|
||||||
|
if ch == b'\r':
|
||||||
|
continue
|
||||||
|
if ch == b':':
|
||||||
|
break
|
||||||
|
|
||||||
|
hqxifp = _Hqxdecoderengine(ifp)
|
||||||
|
self.ifp = _Rledecoderengine(hqxifp)
|
||||||
|
self.crc = 0
|
||||||
|
self._readheader()
|
||||||
|
|
||||||
|
def _read(self, len):
|
||||||
|
data = self.ifp.read(len)
|
||||||
|
self.crc = binascii.crc_hqx(data, self.crc)
|
||||||
|
return data
|
||||||
|
|
||||||
|
def _checkcrc(self):
|
||||||
|
filecrc = struct.unpack('>h', self.ifp.read(2))[0] & 0xffff
|
||||||
|
#self.crc = binascii.crc_hqx('\0\0', self.crc)
|
||||||
|
# XXXX Is this needed??
|
||||||
|
self.crc = self.crc & 0xffff
|
||||||
|
if filecrc != self.crc:
|
||||||
|
raise Error('CRC error, computed %x, read %x'
|
||||||
|
% (self.crc, filecrc))
|
||||||
|
self.crc = 0
|
||||||
|
|
||||||
|
def _readheader(self):
|
||||||
|
len = self._read(1)
|
||||||
|
fname = self._read(ord(len))
|
||||||
|
rest = self._read(1 + 4 + 4 + 2 + 4 + 4)
|
||||||
|
self._checkcrc()
|
||||||
|
|
||||||
|
type = rest[1:5]
|
||||||
|
creator = rest[5:9]
|
||||||
|
flags = struct.unpack('>h', rest[9:11])[0]
|
||||||
|
self.dlen = struct.unpack('>l', rest[11:15])[0]
|
||||||
|
self.rlen = struct.unpack('>l', rest[15:19])[0]
|
||||||
|
|
||||||
|
self.FName = fname
|
||||||
|
self.FInfo = FInfo()
|
||||||
|
self.FInfo.Creator = creator
|
||||||
|
self.FInfo.Type = type
|
||||||
|
self.FInfo.Flags = flags
|
||||||
|
|
||||||
|
self.state = _DID_HEADER
|
||||||
|
|
||||||
|
def read(self, *n):
|
||||||
|
if self.state != _DID_HEADER:
|
||||||
|
raise Error('Read data at wrong time')
|
||||||
|
if n:
|
||||||
|
n = n[0]
|
||||||
|
n = min(n, self.dlen)
|
||||||
|
else:
|
||||||
|
n = self.dlen
|
||||||
|
rv = b''
|
||||||
|
while len(rv) < n:
|
||||||
|
rv = rv + self._read(n-len(rv))
|
||||||
|
self.dlen = self.dlen - n
|
||||||
|
return rv
|
||||||
|
|
||||||
|
def close_data(self):
|
||||||
|
if self.state != _DID_HEADER:
|
||||||
|
raise Error('close_data at wrong time')
|
||||||
|
if self.dlen:
|
||||||
|
dummy = self._read(self.dlen)
|
||||||
|
self._checkcrc()
|
||||||
|
self.state = _DID_DATA
|
||||||
|
|
||||||
|
def read_rsrc(self, *n):
|
||||||
|
if self.state == _DID_HEADER:
|
||||||
|
self.close_data()
|
||||||
|
if self.state != _DID_DATA:
|
||||||
|
raise Error('Read resource data at wrong time')
|
||||||
|
if n:
|
||||||
|
n = n[0]
|
||||||
|
n = min(n, self.rlen)
|
||||||
|
else:
|
||||||
|
n = self.rlen
|
||||||
|
self.rlen = self.rlen - n
|
||||||
|
return self._read(n)
|
||||||
|
|
||||||
|
def close(self):
|
||||||
|
if self.state is None:
|
||||||
|
return
|
||||||
|
try:
|
||||||
|
if self.rlen:
|
||||||
|
dummy = self.read_rsrc(self.rlen)
|
||||||
|
self._checkcrc()
|
||||||
|
finally:
|
||||||
|
self.state = None
|
||||||
|
self.ifp.close()
|
||||||
|
|
||||||
|
def hexbin(inp, out):
|
||||||
|
"""hexbin(infilename, outfilename) - Decode binhexed file"""
|
||||||
|
ifp = HexBin(inp)
|
||||||
|
finfo = ifp.FInfo
|
||||||
|
if not out:
|
||||||
|
out = ifp.FName
|
||||||
|
|
||||||
|
with io.open(out, 'wb') as ofp:
|
||||||
|
# XXXX Do translation on non-mac systems
|
||||||
|
while True:
|
||||||
|
d = ifp.read(128000)
|
||||||
|
if not d: break
|
||||||
|
ofp.write(d)
|
||||||
|
ifp.close_data()
|
||||||
|
|
||||||
|
d = ifp.read_rsrc(128000)
|
||||||
|
if d:
|
||||||
|
ofp = openrsrc(out, 'wb')
|
||||||
|
ofp.write(d)
|
||||||
|
while True:
|
||||||
|
d = ifp.read_rsrc(128000)
|
||||||
|
if not d: break
|
||||||
|
ofp.write(d)
|
||||||
|
ofp.close()
|
||||||
|
|
||||||
|
ifp.close()
|
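
For orientation only (not from the diff): the "easy interface" documented at the top of the module round-trips a file through the BinHex 4.0 format. The filenames below are hypothetical:

    import binhex

    binhex.binhex('input.bin', 'output.hqx')      # encode to BinHex
    binhex.hexbin('output.hqx', 'restored.bin')   # decode back to raw bytes
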
92  Lib/bisect.py  Normal file
@@ -0,0 +1,92 @@
"""Bisection algorithms."""
|
||||||
|
|
||||||
|
def insort_right(a, x, lo=0, hi=None):
|
||||||
|
"""Insert item x in list a, and keep it sorted assuming a is sorted.
|
||||||
|
|
||||||
|
If x is already in a, insert it to the right of the rightmost x.
|
||||||
|
|
||||||
|
Optional args lo (default 0) and hi (default len(a)) bound the
|
||||||
|
slice of a to be searched.
|
||||||
|
"""
|
||||||
|
|
||||||
|
if lo < 0:
|
||||||
|
raise ValueError('lo must be non-negative')
|
||||||
|
if hi is None:
|
||||||
|
hi = len(a)
|
||||||
|
while lo < hi:
|
||||||
|
mid = (lo+hi)//2
|
||||||
|
if x < a[mid]: hi = mid
|
||||||
|
else: lo = mid+1
|
||||||
|
a.insert(lo, x)
|
||||||
|
|
||||||
|
def bisect_right(a, x, lo=0, hi=None):
|
||||||
|
"""Return the index where to insert item x in list a, assuming a is sorted.
|
||||||
|
|
||||||
|
The return value i is such that all e in a[:i] have e <= x, and all e in
|
||||||
|
a[i:] have e > x. So if x already appears in the list, a.insert(x) will
|
||||||
|
insert just after the rightmost x already there.
|
||||||
|
|
||||||
|
Optional args lo (default 0) and hi (default len(a)) bound the
|
||||||
|
slice of a to be searched.
|
||||||
|
"""
|
||||||
|
|
||||||
|
if lo < 0:
|
||||||
|
raise ValueError('lo must be non-negative')
|
||||||
|
if hi is None:
|
||||||
|
hi = len(a)
|
||||||
|
while lo < hi:
|
||||||
|
mid = (lo+hi)//2
|
||||||
|
if x < a[mid]: hi = mid
|
||||||
|
else: lo = mid+1
|
||||||
|
return lo
|
||||||
|
|
||||||
|
def insort_left(a, x, lo=0, hi=None):
|
||||||
|
"""Insert item x in list a, and keep it sorted assuming a is sorted.
|
||||||
|
|
||||||
|
If x is already in a, insert it to the left of the leftmost x.
|
||||||
|
|
||||||
|
Optional args lo (default 0) and hi (default len(a)) bound the
|
||||||
|
slice of a to be searched.
|
||||||
|
"""
|
||||||
|
|
||||||
|
if lo < 0:
|
||||||
|
raise ValueError('lo must be non-negative')
|
||||||
|
if hi is None:
|
||||||
|
hi = len(a)
|
||||||
|
while lo < hi:
|
||||||
|
mid = (lo+hi)//2
|
||||||
|
if a[mid] < x: lo = mid+1
|
||||||
|
else: hi = mid
|
||||||
|
a.insert(lo, x)
|
||||||
|
|
||||||
|
|
||||||
|
def bisect_left(a, x, lo=0, hi=None):
|
||||||
|
"""Return the index where to insert item x in list a, assuming a is sorted.
|
||||||
|
|
||||||
|
The return value i is such that all e in a[:i] have e < x, and all e in
|
||||||
|
a[i:] have e >= x. So if x already appears in the list, a.insert(x) will
|
||||||
|
insert just before the leftmost x already there.
|
||||||
|
|
||||||
|
Optional args lo (default 0) and hi (default len(a)) bound the
|
||||||
|
slice of a to be searched.
|
||||||
|
"""
|
||||||
|
|
||||||
|
if lo < 0:
|
||||||
|
raise ValueError('lo must be non-negative')
|
||||||
|
if hi is None:
|
||||||
|
hi = len(a)
|
||||||
|
while lo < hi:
|
||||||
|
mid = (lo+hi)//2
|
||||||
|
if a[mid] < x: lo = mid+1
|
||||||
|
else: hi = mid
|
||||||
|
return lo
|
||||||
|
|
||||||
|
# Overwrite above definitions with a fast C implementation
|
||||||
|
try:
|
||||||
|
from _bisect import *
|
||||||
|
except ImportError:
|
||||||
|
pass
|
||||||
|
|
||||||
|
# Create aliases
|
||||||
|
bisect = bisect_right
|
||||||
|
insort = insort_right
|
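
A small usage sketch of the pure-Python API above (illustrative, not part of the diff):

    from bisect import bisect_left, bisect_right, insort

    a = [1, 4, 9]
    insort(a, 4)                    # a == [1, 4, 4, 9]; the list stays sorted
    assert bisect_left(a, 4) == 1   # index of the leftmost 4
    assert bisect_right(a, 4) == 3  # insertion point after the rightmost 4
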
357  Lib/bz2.py  Normal file
@@ -0,0 +1,357 @@
"""Interface to the libbzip2 compression library.
|
||||||
|
|
||||||
|
This module provides a file interface, classes for incremental
|
||||||
|
(de)compression, and functions for one-shot (de)compression.
|
||||||
|
"""
|
||||||
|
|
||||||
|
__all__ = ["BZ2File", "BZ2Compressor", "BZ2Decompressor",
|
||||||
|
"open", "compress", "decompress"]
|
||||||
|
|
||||||
|
__author__ = "Nadeem Vawda <nadeem.vawda@gmail.com>"
|
||||||
|
|
||||||
|
from builtins import open as _builtin_open
|
||||||
|
import io
|
||||||
|
import os
|
||||||
|
import warnings
|
||||||
|
import _compression
|
||||||
|
from threading import RLock
|
||||||
|
|
||||||
|
from _bz2 import BZ2Compressor, BZ2Decompressor
|
||||||
|
|
||||||
|
|
||||||
|
_MODE_CLOSED = 0
|
||||||
|
_MODE_READ = 1
|
||||||
|
# Value 2 no longer used
|
||||||
|
_MODE_WRITE = 3
|
||||||
|
|
||||||
|
|
||||||
|
class BZ2File(_compression.BaseStream):
|
||||||
|
|
||||||
|
"""A file object providing transparent bzip2 (de)compression.
|
||||||
|
|
||||||
|
A BZ2File can act as a wrapper for an existing file object, or refer
|
||||||
|
directly to a named file on disk.
|
||||||
|
|
||||||
|
Note that BZ2File provides a *binary* file interface - data read is
|
||||||
|
returned as bytes, and data to be written should be given as bytes.
|
||||||
|
"""
|
||||||
|
|
||||||
|
def __init__(self, filename, mode="r", buffering=None, compresslevel=9):
|
||||||
|
"""Open a bzip2-compressed file.
|
||||||
|
|
||||||
|
If filename is a str, bytes, or PathLike object, it gives the
|
||||||
|
name of the file to be opened. Otherwise, it should be a file
|
||||||
|
object, which will be used to read or write the compressed data.
|
||||||
|
|
||||||
|
mode can be 'r' for reading (default), 'w' for (over)writing,
|
||||||
|
'x' for creating exclusively, or 'a' for appending. These can
|
||||||
|
equivalently be given as 'rb', 'wb', 'xb', and 'ab'.
|
||||||
|
|
||||||
|
buffering is ignored. Its use is deprecated.
|
||||||
|
|
||||||
|
If mode is 'w', 'x' or 'a', compresslevel can be a number between 1
|
||||||
|
and 9 specifying the level of compression: 1 produces the least
|
||||||
|
compression, and 9 (default) produces the most compression.
|
||||||
|
|
||||||
|
If mode is 'r', the input file may be the concatenation of
|
||||||
|
multiple compressed streams.
|
||||||
|
"""
|
||||||
|
# This lock must be recursive, so that BufferedIOBase's
|
||||||
|
# writelines() does not deadlock.
|
||||||
|
self._lock = RLock()
|
||||||
|
self._fp = None
|
||||||
|
self._closefp = False
|
||||||
|
self._mode = _MODE_CLOSED
|
||||||
|
|
||||||
|
if buffering is not None:
|
||||||
|
warnings.warn("Use of 'buffering' argument is deprecated",
|
||||||
|
DeprecationWarning)
|
||||||
|
|
||||||
|
if not (1 <= compresslevel <= 9):
|
||||||
|
raise ValueError("compresslevel must be between 1 and 9")
|
||||||
|
|
||||||
|
if mode in ("", "r", "rb"):
|
||||||
|
mode = "rb"
|
||||||
|
mode_code = _MODE_READ
|
||||||
|
elif mode in ("w", "wb"):
|
||||||
|
mode = "wb"
|
||||||
|
mode_code = _MODE_WRITE
|
||||||
|
self._compressor = BZ2Compressor(compresslevel)
|
||||||
|
elif mode in ("x", "xb"):
|
||||||
|
mode = "xb"
|
||||||
|
mode_code = _MODE_WRITE
|
||||||
|
self._compressor = BZ2Compressor(compresslevel)
|
||||||
|
elif mode in ("a", "ab"):
|
||||||
|
mode = "ab"
|
||||||
|
mode_code = _MODE_WRITE
|
||||||
|
self._compressor = BZ2Compressor(compresslevel)
|
||||||
|
else:
|
||||||
|
raise ValueError("Invalid mode: %r" % (mode,))
|
||||||
|
|
||||||
|
if isinstance(filename, (str, bytes, os.PathLike)):
|
||||||
|
self._fp = _builtin_open(filename, mode)
|
||||||
|
self._closefp = True
|
||||||
|
self._mode = mode_code
|
||||||
|
elif hasattr(filename, "read") or hasattr(filename, "write"):
|
||||||
|
self._fp = filename
|
||||||
|
self._mode = mode_code
|
||||||
|
else:
|
||||||
|
raise TypeError("filename must be a str, bytes, file or PathLike object")
|
||||||
|
|
||||||
|
if self._mode == _MODE_READ:
|
||||||
|
raw = _compression.DecompressReader(self._fp,
|
||||||
|
BZ2Decompressor, trailing_error=OSError)
|
||||||
|
self._buffer = io.BufferedReader(raw)
|
||||||
|
else:
|
||||||
|
self._pos = 0
|
||||||
|
|
||||||
|
def close(self):
|
||||||
|
"""Flush and close the file.
|
||||||
|
|
||||||
|
May be called more than once without error. Once the file is
|
||||||
|
closed, any other operation on it will raise a ValueError.
|
||||||
|
"""
|
||||||
|
with self._lock:
|
||||||
|
if self._mode == _MODE_CLOSED:
|
||||||
|
return
|
||||||
|
try:
|
||||||
|
if self._mode == _MODE_READ:
|
||||||
|
self._buffer.close()
|
||||||
|
elif self._mode == _MODE_WRITE:
|
||||||
|
self._fp.write(self._compressor.flush())
|
||||||
|
self._compressor = None
|
||||||
|
finally:
|
||||||
|
try:
|
||||||
|
if self._closefp:
|
||||||
|
self._fp.close()
|
||||||
|
finally:
|
||||||
|
self._fp = None
|
||||||
|
self._closefp = False
|
||||||
|
self._mode = _MODE_CLOSED
|
||||||
|
self._buffer = None
|
||||||
|
|
||||||
|
@property
|
||||||
|
def closed(self):
|
||||||
|
"""True if this file is closed."""
|
||||||
|
return self._mode == _MODE_CLOSED
|
||||||
|
|
||||||
|
def fileno(self):
|
||||||
|
"""Return the file descriptor for the underlying file."""
|
||||||
|
self._check_not_closed()
|
||||||
|
return self._fp.fileno()
|
||||||
|
|
||||||
|
def seekable(self):
|
||||||
|
"""Return whether the file supports seeking."""
|
||||||
|
return self.readable() and self._buffer.seekable()
|
||||||
|
|
||||||
|
def readable(self):
|
||||||
|
"""Return whether the file was opened for reading."""
|
||||||
|
self._check_not_closed()
|
||||||
|
return self._mode == _MODE_READ
|
||||||
|
|
||||||
|
def writable(self):
|
||||||
|
"""Return whether the file was opened for writing."""
|
||||||
|
self._check_not_closed()
|
||||||
|
return self._mode == _MODE_WRITE
|
||||||
|
|
||||||
|
def peek(self, n=0):
|
||||||
|
"""Return buffered data without advancing the file position.
|
||||||
|
|
||||||
|
Always returns at least one byte of data, unless at EOF.
|
||||||
|
The exact number of bytes returned is unspecified.
|
||||||
|
"""
|
||||||
|
with self._lock:
|
||||||
|
self._check_can_read()
|
||||||
|
# Relies on the undocumented fact that BufferedReader.peek()
|
||||||
|
# always returns at least one byte (except at EOF), independent
|
||||||
|
# of the value of n
|
||||||
|
return self._buffer.peek(n)
|
||||||
|
|
||||||
|
def read(self, size=-1):
|
||||||
|
"""Read up to size uncompressed bytes from the file.
|
||||||
|
|
||||||
|
If size is negative or omitted, read until EOF is reached.
|
||||||
|
Returns b'' if the file is already at EOF.
|
||||||
|
"""
|
||||||
|
with self._lock:
|
||||||
|
self._check_can_read()
|
||||||
|
return self._buffer.read(size)
|
||||||
|
|
||||||
|
def read1(self, size=-1):
|
||||||
|
"""Read up to size uncompressed bytes, while trying to avoid
|
||||||
|
making multiple reads from the underlying stream. Reads up to a
|
||||||
|
buffer's worth of data if size is negative.
|
||||||
|
|
||||||
|
Returns b'' if the file is at EOF.
|
||||||
|
"""
|
||||||
|
with self._lock:
|
||||||
|
self._check_can_read()
|
||||||
|
if size < 0:
|
||||||
|
size = io.DEFAULT_BUFFER_SIZE
|
||||||
|
return self._buffer.read1(size)
|
||||||
|
|
||||||
|
def readinto(self, b):
|
||||||
|
"""Read bytes into b.
|
||||||
|
|
||||||
|
Returns the number of bytes read (0 for EOF).
|
||||||
|
"""
|
||||||
|
with self._lock:
|
||||||
|
self._check_can_read()
|
||||||
|
return self._buffer.readinto(b)
|
||||||
|
|
||||||
|
def readline(self, size=-1):
|
||||||
|
"""Read a line of uncompressed bytes from the file.
|
||||||
|
|
||||||
|
The terminating newline (if present) is retained. If size is
|
||||||
|
non-negative, no more than size bytes will be read (in which
|
||||||
|
case the line may be incomplete). Returns b'' if already at EOF.
|
||||||
|
"""
|
||||||
|
if not isinstance(size, int):
|
||||||
|
if not hasattr(size, "__index__"):
|
||||||
|
raise TypeError("Integer argument expected")
|
||||||
|
size = size.__index__()
|
||||||
|
with self._lock:
|
||||||
|
self._check_can_read()
|
||||||
|
return self._buffer.readline(size)
|
||||||
|
|
||||||
|
def readlines(self, size=-1):
|
||||||
|
"""Read a list of lines of uncompressed bytes from the file.
|
||||||
|
|
||||||
|
size can be specified to control the number of lines read: no
|
||||||
|
further lines will be read once the total size of the lines read
|
||||||
|
so far equals or exceeds size.
|
||||||
|
"""
|
||||||
|
if not isinstance(size, int):
|
||||||
|
if not hasattr(size, "__index__"):
|
||||||
|
raise TypeError("Integer argument expected")
|
||||||
|
size = size.__index__()
|
||||||
|
with self._lock:
|
||||||
|
self._check_can_read()
|
||||||
|
return self._buffer.readlines(size)
|
||||||
|
|
||||||
|
def write(self, data):
|
||||||
|
"""Write a byte string to the file.
|
||||||
|
|
||||||
|
Returns the number of uncompressed bytes written, which is
|
||||||
|
always len(data). Note that due to buffering, the file on disk
|
||||||
|
may not reflect the data written until close() is called.
|
||||||
|
"""
|
||||||
|
with self._lock:
|
||||||
|
self._check_can_write()
|
||||||
|
compressed = self._compressor.compress(data)
|
||||||
|
self._fp.write(compressed)
|
||||||
|
self._pos += len(data)
|
||||||
|
return len(data)
|
||||||
|
|
||||||
|
def writelines(self, seq):
|
||||||
|
"""Write a sequence of byte strings to the file.
|
||||||
|
|
||||||
|
Returns the number of uncompressed bytes written.
|
||||||
|
seq can be any iterable yielding byte strings.
|
||||||
|
|
||||||
|
Line separators are not added between the written byte strings.
|
||||||
|
"""
|
||||||
|
with self._lock:
|
||||||
|
return _compression.BaseStream.writelines(self, seq)
|
||||||
|
|
||||||
|
def seek(self, offset, whence=io.SEEK_SET):
|
||||||
|
"""Change the file position.
|
||||||
|
|
||||||
|
The new position is specified by offset, relative to the
|
||||||
|
position indicated by whence. Values for whence are:
|
||||||
|
|
||||||
|
0: start of stream (default); offset must not be negative
|
||||||
|
1: current stream position
|
||||||
|
2: end of stream; offset must not be positive
|
||||||
|
|
||||||
|
Returns the new file position.
|
||||||
|
|
||||||
|
Note that seeking is emulated, so depending on the parameters,
|
||||||
|
this operation may be extremely slow.
|
||||||
|
"""
|
||||||
|
with self._lock:
|
||||||
|
self._check_can_seek()
|
||||||
|
return self._buffer.seek(offset, whence)
|
||||||
|
|
||||||
|
def tell(self):
|
||||||
|
"""Return the current file position."""
|
||||||
|
with self._lock:
|
||||||
|
self._check_not_closed()
|
||||||
|
if self._mode == _MODE_READ:
|
||||||
|
return self._buffer.tell()
|
||||||
|
return self._pos
|
||||||
|
|
||||||
|
|
||||||
|
def open(filename, mode="rb", compresslevel=9,
|
||||||
|
encoding=None, errors=None, newline=None):
|
||||||
|
"""Open a bzip2-compressed file in binary or text mode.
|
||||||
|
|
||||||
|
The filename argument can be an actual filename (a str, bytes, or
|
||||||
|
PathLike object), or an existing file object to read from or write
|
||||||
|
to.
|
||||||
|
|
||||||
|
The mode argument can be "r", "rb", "w", "wb", "x", "xb", "a" or
|
||||||
|
"ab" for binary mode, or "rt", "wt", "xt" or "at" for text mode.
|
||||||
|
The default mode is "rb", and the default compresslevel is 9.
|
||||||
|
|
||||||
|
For binary mode, this function is equivalent to the BZ2File
|
||||||
|
constructor: BZ2File(filename, mode, compresslevel). In this case,
|
||||||
|
the encoding, errors and newline arguments must not be provided.
|
||||||
|
|
||||||
|
For text mode, a BZ2File object is created, and wrapped in an
|
||||||
|
io.TextIOWrapper instance with the specified encoding, error
|
||||||
|
handling behavior, and line ending(s).
|
||||||
|
|
||||||
|
"""
|
||||||
|
if "t" in mode:
|
||||||
|
if "b" in mode:
|
||||||
|
raise ValueError("Invalid mode: %r" % (mode,))
|
||||||
|
else:
|
||||||
|
if encoding is not None:
|
||||||
|
raise ValueError("Argument 'encoding' not supported in binary mode")
|
||||||
|
if errors is not None:
|
||||||
|
raise ValueError("Argument 'errors' not supported in binary mode")
|
||||||
|
if newline is not None:
|
||||||
|
raise ValueError("Argument 'newline' not supported in binary mode")
|
||||||
|
|
||||||
|
bz_mode = mode.replace("t", "")
|
||||||
|
binary_file = BZ2File(filename, bz_mode, compresslevel=compresslevel)
|
||||||
|
|
||||||
|
if "t" in mode:
|
||||||
|
return io.TextIOWrapper(binary_file, encoding, errors, newline)
|
||||||
|
else:
|
||||||
|
return binary_file
|
||||||
|
|
||||||
|
|
||||||
|
def compress(data, compresslevel=9):
|
||||||
|
"""Compress a block of data.
|
||||||
|
|
||||||
|
compresslevel, if given, must be a number between 1 and 9.
|
||||||
|
|
||||||
|
For incremental compression, use a BZ2Compressor object instead.
|
||||||
|
"""
|
||||||
|
comp = BZ2Compressor(compresslevel)
|
||||||
|
return comp.compress(data) + comp.flush()
|
||||||
|
|
||||||
|
|
||||||
|
def decompress(data):
|
||||||
|
"""Decompress a block of data.
|
||||||
|
|
||||||
|
For incremental decompression, use a BZ2Decompressor object instead.
|
||||||
|
"""
|
||||||
|
results = []
|
||||||
|
while data:
|
||||||
|
decomp = BZ2Decompressor()
|
||||||
|
try:
|
||||||
|
res = decomp.decompress(data)
|
||||||
|
except OSError:
|
||||||
|
if results:
|
||||||
|
break # Leftover data is not a valid bzip2 stream; ignore it.
|
||||||
|
else:
|
||||||
|
raise # Error on the first iteration; bail out.
|
||||||
|
results.append(res)
|
||||||
|
if not decomp.eof:
|
||||||
|
raise ValueError("Compressed data ended before the "
|
||||||
|
"end-of-stream marker was reached")
|
||||||
|
data = decomp.unused_data
|
||||||
|
return b"".join(results)
|
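A minimal usage sketch of the module above, to orient readers of this diff. It is not part of the commit, and the file path is illustrative only:

    import bz2

    payload = b"hello " * 100
    blob = bz2.compress(payload, compresslevel=9)   # one-shot compression
    assert bz2.decompress(blob) == payload          # also handles concatenated streams

    # Transparent file (de)compression; "/tmp/demo.bz2" is a made-up path.
    with bz2.open("/tmp/demo.bz2", "wt", encoding="utf-8") as f:
        f.write("line 1\nline 2\n")
    with bz2.open("/tmp/demo.bz2", "rt", encoding="utf-8") as f:
        print(f.read(), end="")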
173  Lib/cProfile.py  Normal file
@@ -0,0 +1,173 @@
#! /usr/bin/env python3

"""Python interface for the 'lsprof' profiler.
   Compatible with the 'profile' module.
"""

__all__ = ["run", "runctx", "Profile"]

import _lsprof
import profile as _pyprofile

# ____________________________________________________________
# Simple interface

def run(statement, filename=None, sort=-1):
    return _pyprofile._Utils(Profile).run(statement, filename, sort)

def runctx(statement, globals, locals, filename=None, sort=-1):
    return _pyprofile._Utils(Profile).runctx(statement, globals, locals,
                                             filename, sort)

run.__doc__ = _pyprofile.run.__doc__
runctx.__doc__ = _pyprofile.runctx.__doc__

# ____________________________________________________________

class Profile(_lsprof.Profiler):
    """Profile(timer=None, timeunit=None, subcalls=True, builtins=True)

    Builds a profiler object using the specified timer function.
    The default timer is a fast built-in one based on real time.
    For custom timer functions returning integers, timeunit can
    be a float specifying a scale (i.e. how long each integer unit
    is, in seconds).
    """

    # Most of the functionality is in the base class.
    # This subclass only adds convenient and backward-compatible methods.

    def print_stats(self, sort=-1):
        import pstats
        pstats.Stats(self).strip_dirs().sort_stats(sort).print_stats()

    def dump_stats(self, file):
        import marshal
        with open(file, 'wb') as f:
            self.create_stats()
            marshal.dump(self.stats, f)

    def create_stats(self):
        self.disable()
        self.snapshot_stats()

    def snapshot_stats(self):
        entries = self.getstats()
        self.stats = {}
        callersdicts = {}
        # call information
        for entry in entries:
            func = label(entry.code)
            nc = entry.callcount          # ncalls column of pstats (before '/')
            cc = nc - entry.reccallcount  # ncalls column of pstats (after '/')
            tt = entry.inlinetime         # tottime column of pstats
            ct = entry.totaltime          # cumtime column of pstats
            callers = {}
            callersdicts[id(entry.code)] = callers
            self.stats[func] = cc, nc, tt, ct, callers
        # subcall information
        for entry in entries:
            if entry.calls:
                func = label(entry.code)
                for subentry in entry.calls:
                    try:
                        callers = callersdicts[id(subentry.code)]
                    except KeyError:
                        continue
                    nc = subentry.callcount
                    cc = nc - subentry.reccallcount
                    tt = subentry.inlinetime
                    ct = subentry.totaltime
                    if func in callers:
                        prev = callers[func]
                        nc += prev[0]
                        cc += prev[1]
                        tt += prev[2]
                        ct += prev[3]
                    callers[func] = nc, cc, tt, ct

    # The following two methods can be called by clients to use
    # a profiler to profile a statement, given as a string.

    def run(self, cmd):
        import __main__
        dict = __main__.__dict__
        return self.runctx(cmd, dict, dict)

    def runctx(self, cmd, globals, locals):
        self.enable()
        try:
            exec(cmd, globals, locals)
        finally:
            self.disable()
        return self

    # This method is more useful to profile a single function call.
    def runcall(self, func, *args, **kw):
        self.enable()
        try:
            return func(*args, **kw)
        finally:
            self.disable()

# ____________________________________________________________

def label(code):
    if isinstance(code, str):
        return ('~', 0, code)    # built-in functions ('~' sorts at the end)
    else:
        return (code.co_filename, code.co_firstlineno, code.co_name)

# ____________________________________________________________

def main():
    import os
    import sys
    import runpy
    import pstats
    from optparse import OptionParser
    usage = "cProfile.py [-o output_file_path] [-s sort] [-m module | scriptfile] [arg] ..."
    parser = OptionParser(usage=usage)
    parser.allow_interspersed_args = False
    parser.add_option('-o', '--outfile', dest="outfile",
        help="Save stats to <outfile>", default=None)
    parser.add_option('-s', '--sort', dest="sort",
        help="Sort order when printing to stdout, based on pstats.Stats class",
        default=-1,
        choices=sorted(pstats.Stats.sort_arg_dict_default))
    parser.add_option('-m', dest="module", action="store_true",
        help="Profile a library module", default=False)

    if not sys.argv[1:]:
        parser.print_usage()
        sys.exit(2)

    (options, args) = parser.parse_args()
    sys.argv[:] = args

    if len(args) > 0:
        if options.module:
            code = "run_module(modname, run_name='__main__')"
            globs = {
                'run_module': runpy.run_module,
                'modname': args[0]
            }
        else:
            progname = args[0]
            sys.path.insert(0, os.path.dirname(progname))
            with open(progname, 'rb') as fp:
                code = compile(fp.read(), progname, 'exec')
            globs = {
                '__file__': progname,
                '__name__': '__main__',
                '__package__': None,
                '__cached__': None,
            }
        runctx(code, globs, None, options.outfile, options.sort)
    else:
        parser.print_usage()
    return parser

# When invoked as main program, invoke the profiler on a script
if __name__ == '__main__':
    main()
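A minimal sketch of how the interface above is used (not part of the commit; the profiled function is a made-up example, and the snippet assumes it runs as a script so that work is visible in __main__):

    import cProfile
    import pstats

    def work():
        return sum(i * i for i in range(100000))

    prof = cProfile.Profile()
    prof.runcall(work)                       # profile one function call
    pstats.Stats(prof).sort_stats("cumulative").print_stats(5)

    cProfile.run("work()", sort="tottime")   # one-liner; execs in __main__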
770  Lib/calendar.py  Normal file
@@ -0,0 +1,770 @@
"""Calendar printing functions
|
||||||
|
|
||||||
|
Note when comparing these calendars to the ones printed by cal(1): By
|
||||||
|
default, these calendars have Monday as the first day of the week, and
|
||||||
|
Sunday as the last (the European convention). Use setfirstweekday() to
|
||||||
|
set the first day of the week (0=Monday, 6=Sunday)."""
|
||||||
|
|
||||||
|
import sys
|
||||||
|
import datetime
|
||||||
|
import locale as _locale
|
||||||
|
from itertools import repeat
|
||||||
|
|
||||||
|
__all__ = ["IllegalMonthError", "IllegalWeekdayError", "setfirstweekday",
|
||||||
|
"firstweekday", "isleap", "leapdays", "weekday", "monthrange",
|
||||||
|
"monthcalendar", "prmonth", "month", "prcal", "calendar",
|
||||||
|
"timegm", "month_name", "month_abbr", "day_name", "day_abbr",
|
||||||
|
"Calendar", "TextCalendar", "HTMLCalendar", "LocaleTextCalendar",
|
||||||
|
"LocaleHTMLCalendar", "weekheader"]
|
||||||
|
|
||||||
|
# Exception raised for bad input (with string parameter for details)
|
||||||
|
error = ValueError
|
||||||
|
|
||||||
|
# Exceptions raised for bad input
|
||||||
|
class IllegalMonthError(ValueError):
|
||||||
|
def __init__(self, month):
|
||||||
|
self.month = month
|
||||||
|
def __str__(self):
|
||||||
|
return "bad month number %r; must be 1-12" % self.month
|
||||||
|
|
||||||
|
|
||||||
|
class IllegalWeekdayError(ValueError):
|
||||||
|
def __init__(self, weekday):
|
||||||
|
self.weekday = weekday
|
||||||
|
def __str__(self):
|
||||||
|
return "bad weekday number %r; must be 0 (Monday) to 6 (Sunday)" % self.weekday
|
||||||
|
|
||||||
|
|
||||||
|
# Constants for months referenced later
|
||||||
|
January = 1
|
||||||
|
February = 2
|
||||||
|
|
||||||
|
# Number of days per month (except for February in leap years)
|
||||||
|
mdays = [0, 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
|
||||||
|
|
||||||
|
# This module used to have hard-coded lists of day and month names, as
|
||||||
|
# English strings. The classes following emulate a read-only version of
|
||||||
|
# that, but supply localized names. Note that the values are computed
|
||||||
|
# fresh on each call, in case the user changes locale between calls.
|
||||||
|
|
||||||
|
class _localized_month:
|
||||||
|
|
||||||
|
_months = [datetime.date(2001, i+1, 1).strftime for i in range(12)]
|
||||||
|
_months.insert(0, lambda x: "")
|
||||||
|
|
||||||
|
def __init__(self, format):
|
||||||
|
self.format = format
|
||||||
|
|
||||||
|
def __getitem__(self, i):
|
||||||
|
funcs = self._months[i]
|
||||||
|
if isinstance(i, slice):
|
||||||
|
return [f(self.format) for f in funcs]
|
||||||
|
else:
|
||||||
|
return funcs(self.format)
|
||||||
|
|
||||||
|
def __len__(self):
|
||||||
|
return 13
|
||||||
|
|
||||||
|
|
||||||
|
class _localized_day:
|
||||||
|
|
||||||
|
# January 1, 2001, was a Monday.
|
||||||
|
_days = [datetime.date(2001, 1, i+1).strftime for i in range(7)]
|
||||||
|
|
||||||
|
def __init__(self, format):
|
||||||
|
self.format = format
|
||||||
|
|
||||||
|
def __getitem__(self, i):
|
||||||
|
funcs = self._days[i]
|
||||||
|
if isinstance(i, slice):
|
||||||
|
return [f(self.format) for f in funcs]
|
||||||
|
else:
|
||||||
|
return funcs(self.format)
|
||||||
|
|
||||||
|
def __len__(self):
|
||||||
|
return 7
|
||||||
|
|
||||||
|
|
||||||
|
# Full and abbreviated names of weekdays
|
||||||
|
day_name = _localized_day('%A')
|
||||||
|
day_abbr = _localized_day('%a')
|
||||||
|
|
||||||
|
# Full and abbreviated names of months (1-based arrays!!!)
|
||||||
|
month_name = _localized_month('%B')
|
||||||
|
month_abbr = _localized_month('%b')
|
||||||
|
|
||||||
|
# Constants for weekdays
|
||||||
|
(MONDAY, TUESDAY, WEDNESDAY, THURSDAY, FRIDAY, SATURDAY, SUNDAY) = range(7)
|
||||||
|
|
||||||
|
|
||||||
|
def isleap(year):
|
||||||
|
"""Return True for leap years, False for non-leap years."""
|
||||||
|
return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
|
||||||
|
|
||||||
|
|
||||||
|
def leapdays(y1, y2):
|
||||||
|
"""Return number of leap years in range [y1, y2).
|
||||||
|
Assume y1 <= y2."""
|
||||||
|
y1 -= 1
|
||||||
|
y2 -= 1
|
||||||
|
return (y2//4 - y1//4) - (y2//100 - y1//100) + (y2//400 - y1//400)
|
||||||
|
|
||||||
|
|
||||||
|
def weekday(year, month, day):
|
||||||
|
"""Return weekday (0-6 ~ Mon-Sun) for year, month (1-12), day (1-31)."""
|
||||||
|
if not datetime.MINYEAR <= year <= datetime.MAXYEAR:
|
||||||
|
year = 2000 + year % 400
|
||||||
|
return datetime.date(year, month, day).weekday()
|
||||||
|
|
||||||
|
|
||||||
|
def monthrange(year, month):
|
||||||
|
"""Return weekday (0-6 ~ Mon-Sun) and number of days (28-31) for
|
||||||
|
year, month."""
|
||||||
|
if not 1 <= month <= 12:
|
||||||
|
raise IllegalMonthError(month)
|
||||||
|
day1 = weekday(year, month, 1)
|
||||||
|
ndays = mdays[month] + (month == February and isleap(year))
|
||||||
|
return day1, ndays
|
||||||
|
|
||||||
|
|
||||||
|
def monthlen(year, month):
|
||||||
|
return mdays[month] + (month == February and isleap(year))
|
||||||
|
|
||||||
|
|
||||||
|
def prevmonth(year, month):
|
||||||
|
if month == 1:
|
||||||
|
return year-1, 12
|
||||||
|
else:
|
||||||
|
return year, month-1
|
||||||
|
|
||||||
|
|
||||||
|
def nextmonth(year, month):
|
||||||
|
if month == 12:
|
||||||
|
return year+1, 1
|
||||||
|
else:
|
||||||
|
return year, month+1
|
||||||
|
|
||||||
|
|
||||||
|
class Calendar(object):
|
||||||
|
"""
|
||||||
|
Base calendar class. This class doesn't do any formatting. It simply
|
||||||
|
provides data to subclasses.
|
||||||
|
"""
|
||||||
|
|
||||||
|
def __init__(self, firstweekday=0):
|
||||||
|
self.firstweekday = firstweekday # 0 = Monday, 6 = Sunday
|
||||||
|
|
||||||
|
def getfirstweekday(self):
|
||||||
|
return self._firstweekday % 7
|
||||||
|
|
||||||
|
def setfirstweekday(self, firstweekday):
|
||||||
|
self._firstweekday = firstweekday
|
||||||
|
|
||||||
|
firstweekday = property(getfirstweekday, setfirstweekday)
|
||||||
|
|
||||||
|
def iterweekdays(self):
|
||||||
|
"""
|
||||||
|
Return an iterator for one week of weekday numbers starting with the
|
||||||
|
configured first one.
|
||||||
|
"""
|
||||||
|
for i in range(self.firstweekday, self.firstweekday + 7):
|
||||||
|
yield i%7
|
||||||
|
|
||||||
|
def itermonthdates(self, year, month):
|
||||||
|
"""
|
||||||
|
Return an iterator for one month. The iterator will yield datetime.date
|
||||||
|
values and will always iterate through complete weeks, so it will yield
|
||||||
|
dates outside the specified month.
|
||||||
|
"""
|
||||||
|
for y, m, d in self.itermonthdays3(year, month):
|
||||||
|
yield datetime.date(y, m, d)
|
||||||
|
|
||||||
|
def itermonthdays(self, year, month):
|
||||||
|
"""
|
||||||
|
Like itermonthdates(), but will yield day numbers. For days outside
|
||||||
|
the specified month the day number is 0.
|
||||||
|
"""
|
||||||
|
day1, ndays = monthrange(year, month)
|
||||||
|
days_before = (day1 - self.firstweekday) % 7
|
||||||
|
yield from repeat(0, days_before)
|
||||||
|
yield from range(1, ndays + 1)
|
||||||
|
days_after = (self.firstweekday - day1 - ndays) % 7
|
||||||
|
yield from repeat(0, days_after)
|
||||||
|
|
||||||
|
def itermonthdays2(self, year, month):
|
||||||
|
"""
|
||||||
|
Like itermonthdates(), but will yield (day number, weekday number)
|
||||||
|
tuples. For days outside the specified month the day number is 0.
|
||||||
|
"""
|
||||||
|
for i, d in enumerate(self.itermonthdays(year, month), self.firstweekday):
|
||||||
|
yield d, i % 7
|
||||||
|
|
||||||
|
def itermonthdays3(self, year, month):
|
||||||
|
"""
|
||||||
|
Like itermonthdates(), but will yield (year, month, day) tuples. Can be
|
||||||
|
used for dates outside of datetime.date range.
|
||||||
|
"""
|
||||||
|
day1, ndays = monthrange(year, month)
|
||||||
|
days_before = (day1 - self.firstweekday) % 7
|
||||||
|
days_after = (self.firstweekday - day1 - ndays) % 7
|
||||||
|
y, m = prevmonth(year, month)
|
||||||
|
end = monthlen(y, m) + 1
|
||||||
|
for d in range(end-days_before, end):
|
||||||
|
yield y, m, d
|
||||||
|
for d in range(1, ndays + 1):
|
||||||
|
yield year, month, d
|
||||||
|
y, m = nextmonth(year, month)
|
||||||
|
for d in range(1, days_after + 1):
|
||||||
|
yield y, m, d
|
||||||
|
|
||||||
|
def itermonthdays4(self, year, month):
|
||||||
|
"""
|
||||||
|
Like itermonthdates(), but will yield (year, month, day, day_of_week) tuples.
|
||||||
|
Can be used for dates outside of datetime.date range.
|
||||||
|
"""
|
||||||
|
for i, (y, m, d) in enumerate(self.itermonthdays3(year, month)):
|
||||||
|
yield y, m, d, (self.firstweekday + i) % 7
|
||||||
|
|
||||||
|
def monthdatescalendar(self, year, month):
|
||||||
|
"""
|
||||||
|
Return a matrix (list of lists) representing a month's calendar.
|
||||||
|
Each row represents a week; week entries are datetime.date values.
|
||||||
|
"""
|
||||||
|
dates = list(self.itermonthdates(year, month))
|
||||||
|
return [ dates[i:i+7] for i in range(0, len(dates), 7) ]
|
||||||
|
|
||||||
|
def monthdays2calendar(self, year, month):
|
||||||
|
"""
|
||||||
|
Return a matrix representing a month's calendar.
|
||||||
|
Each row represents a week; week entries are
|
||||||
|
(day number, weekday number) tuples. Day numbers outside this month
|
||||||
|
are zero.
|
||||||
|
"""
|
||||||
|
days = list(self.itermonthdays2(year, month))
|
||||||
|
return [ days[i:i+7] for i in range(0, len(days), 7) ]
|
||||||
|
|
||||||
|
def monthdayscalendar(self, year, month):
|
||||||
|
"""
|
||||||
|
Return a matrix representing a month's calendar.
|
||||||
|
Each row represents a week; days outside this month are zero.
|
||||||
|
"""
|
||||||
|
days = list(self.itermonthdays(year, month))
|
||||||
|
return [ days[i:i+7] for i in range(0, len(days), 7) ]
|
||||||
|
|
||||||
|
def yeardatescalendar(self, year, width=3):
|
||||||
|
"""
|
||||||
|
Return the data for the specified year ready for formatting. The return
|
||||||
|
value is a list of month rows. Each month row contains up to width months.
|
||||||
|
Each month contains between 4 and 6 weeks and each week contains 1-7
|
||||||
|
days. Days are datetime.date objects.
|
||||||
|
"""
|
||||||
|
months = [
|
||||||
|
self.monthdatescalendar(year, i)
|
||||||
|
for i in range(January, January+12)
|
||||||
|
]
|
||||||
|
return [months[i:i+width] for i in range(0, len(months), width) ]
|
||||||
|
|
||||||
|
def yeardays2calendar(self, year, width=3):
|
||||||
|
"""
|
||||||
|
Return the data for the specified year ready for formatting (similar to
|
||||||
|
yeardatescalendar()). Entries in the week lists are
|
||||||
|
(day number, weekday number) tuples. Day numbers outside this month are
|
||||||
|
zero.
|
||||||
|
"""
|
||||||
|
months = [
|
||||||
|
self.monthdays2calendar(year, i)
|
||||||
|
for i in range(January, January+12)
|
||||||
|
]
|
||||||
|
return [months[i:i+width] for i in range(0, len(months), width) ]
|
||||||
|
|
||||||
|
def yeardayscalendar(self, year, width=3):
|
||||||
|
"""
|
||||||
|
Return the data for the specified year ready for formatting (similar to
|
||||||
|
yeardatescalendar()). Entries in the week lists are day numbers.
|
||||||
|
Day numbers outside this month are zero.
|
||||||
|
"""
|
||||||
|
months = [
|
||||||
|
self.monthdayscalendar(year, i)
|
||||||
|
for i in range(January, January+12)
|
||||||
|
]
|
||||||
|
return [months[i:i+width] for i in range(0, len(months), width) ]
|
||||||
|
|
||||||
|
|
||||||
|
class TextCalendar(Calendar):
|
||||||
|
"""
|
||||||
|
Subclass of Calendar that outputs a calendar as a simple plain text
|
||||||
|
similar to the UNIX program cal.
|
||||||
|
"""
|
||||||
|
|
||||||
|
def prweek(self, theweek, width):
|
||||||
|
"""
|
||||||
|
Print a single week (no newline).
|
||||||
|
"""
|
||||||
|
print(self.formatweek(theweek, width), end='')
|
||||||
|
|
||||||
|
def formatday(self, day, weekday, width):
|
||||||
|
"""
|
||||||
|
Returns a formatted day.
|
||||||
|
"""
|
||||||
|
if day == 0:
|
||||||
|
s = ''
|
||||||
|
else:
|
||||||
|
s = '%2i' % day # right-align single-digit days
|
||||||
|
return s.center(width)
|
||||||
|
|
||||||
|
def formatweek(self, theweek, width):
|
||||||
|
"""
|
||||||
|
Returns a single week in a string (no newline).
|
||||||
|
"""
|
||||||
|
return ' '.join(self.formatday(d, wd, width) for (d, wd) in theweek)
|
||||||
|
|
||||||
|
def formatweekday(self, day, width):
|
||||||
|
"""
|
||||||
|
Returns a formatted week day name.
|
||||||
|
"""
|
||||||
|
if width >= 9:
|
||||||
|
names = day_name
|
||||||
|
else:
|
||||||
|
names = day_abbr
|
||||||
|
return names[day][:width].center(width)
|
||||||
|
|
||||||
|
def formatweekheader(self, width):
|
||||||
|
"""
|
||||||
|
Return a header for a week.
|
||||||
|
"""
|
||||||
|
return ' '.join(self.formatweekday(i, width) for i in self.iterweekdays())
|
||||||
|
|
||||||
|
def formatmonthname(self, theyear, themonth, width, withyear=True):
|
||||||
|
"""
|
||||||
|
Return a formatted month name.
|
||||||
|
"""
|
||||||
|
s = month_name[themonth]
|
||||||
|
if withyear:
|
||||||
|
s = "%s %r" % (s, theyear)
|
||||||
|
return s.center(width)
|
||||||
|
|
||||||
|
def prmonth(self, theyear, themonth, w=0, l=0):
|
||||||
|
"""
|
||||||
|
Print a month's calendar.
|
||||||
|
"""
|
||||||
|
print(self.formatmonth(theyear, themonth, w, l), end='')
|
||||||
|
|
||||||
|
def formatmonth(self, theyear, themonth, w=0, l=0):
|
||||||
|
"""
|
||||||
|
Return a month's calendar string (multi-line).
|
||||||
|
"""
|
||||||
|
w = max(2, w)
|
||||||
|
l = max(1, l)
|
||||||
|
s = self.formatmonthname(theyear, themonth, 7 * (w + 1) - 1)
|
||||||
|
s = s.rstrip()
|
||||||
|
s += '\n' * l
|
||||||
|
s += self.formatweekheader(w).rstrip()
|
||||||
|
s += '\n' * l
|
||||||
|
for week in self.monthdays2calendar(theyear, themonth):
|
||||||
|
s += self.formatweek(week, w).rstrip()
|
||||||
|
s += '\n' * l
|
||||||
|
return s
|
||||||
|
|
||||||
|
def formatyear(self, theyear, w=2, l=1, c=6, m=3):
|
||||||
|
"""
|
||||||
|
Returns a year's calendar as a multi-line string.
|
||||||
|
"""
|
||||||
|
w = max(2, w)
|
||||||
|
l = max(1, l)
|
||||||
|
c = max(2, c)
|
||||||
|
colwidth = (w + 1) * 7 - 1
|
||||||
|
v = []
|
||||||
|
a = v.append
|
||||||
|
a(repr(theyear).center(colwidth*m+c*(m-1)).rstrip())
|
||||||
|
a('\n'*l)
|
||||||
|
header = self.formatweekheader(w)
|
||||||
|
for (i, row) in enumerate(self.yeardays2calendar(theyear, m)):
|
||||||
|
# months in this row
|
||||||
|
months = range(m*i+1, min(m*(i+1)+1, 13))
|
||||||
|
a('\n'*l)
|
||||||
|
names = (self.formatmonthname(theyear, k, colwidth, False)
|
||||||
|
for k in months)
|
||||||
|
a(formatstring(names, colwidth, c).rstrip())
|
||||||
|
a('\n'*l)
|
||||||
|
headers = (header for k in months)
|
||||||
|
a(formatstring(headers, colwidth, c).rstrip())
|
||||||
|
a('\n'*l)
|
||||||
|
# max number of weeks for this row
|
||||||
|
height = max(len(cal) for cal in row)
|
||||||
|
for j in range(height):
|
||||||
|
weeks = []
|
||||||
|
for cal in row:
|
||||||
|
if j >= len(cal):
|
||||||
|
weeks.append('')
|
||||||
|
else:
|
||||||
|
weeks.append(self.formatweek(cal[j], w))
|
||||||
|
a(formatstring(weeks, colwidth, c).rstrip())
|
||||||
|
a('\n' * l)
|
||||||
|
return ''.join(v)
|
||||||
|
|
||||||
|
def pryear(self, theyear, w=0, l=0, c=6, m=3):
|
||||||
|
"""Print a year's calendar."""
|
||||||
|
print(self.formatyear(theyear, w, l, c, m), end='')
|
||||||
|
|
||||||
|
|
||||||
|
class HTMLCalendar(Calendar):
|
||||||
|
"""
|
||||||
|
This calendar returns complete HTML pages.
|
||||||
|
"""
|
||||||
|
|
||||||
|
# CSS classes for the day <td>s
|
||||||
|
cssclasses = ["mon", "tue", "wed", "thu", "fri", "sat", "sun"]
|
||||||
|
|
||||||
|
# CSS classes for the day <th>s
|
||||||
|
cssclasses_weekday_head = cssclasses
|
||||||
|
|
||||||
|
# CSS class for the days before and after current month
|
||||||
|
cssclass_noday = "noday"
|
||||||
|
|
||||||
|
# CSS class for the month's head
|
||||||
|
cssclass_month_head = "month"
|
||||||
|
|
||||||
|
# CSS class for the month
|
||||||
|
cssclass_month = "month"
|
||||||
|
|
||||||
|
# CSS class for the year's table head
|
||||||
|
cssclass_year_head = "year"
|
||||||
|
|
||||||
|
# CSS class for the whole year table
|
||||||
|
cssclass_year = "year"
|
||||||
|
|
||||||
|
def formatday(self, day, weekday):
|
||||||
|
"""
|
||||||
|
Return a day as a table cell.
|
||||||
|
"""
|
||||||
|
if day == 0:
|
||||||
|
# day outside month
|
||||||
|
return '<td class="%s"> </td>' % self.cssclass_noday
|
||||||
|
else:
|
||||||
|
return '<td class="%s">%d</td>' % (self.cssclasses[weekday], day)
|
||||||
|
|
||||||
|
def formatweek(self, theweek):
|
||||||
|
"""
|
||||||
|
Return a complete week as a table row.
|
||||||
|
"""
|
||||||
|
s = ''.join(self.formatday(d, wd) for (d, wd) in theweek)
|
||||||
|
return '<tr>%s</tr>' % s
|
||||||
|
|
||||||
|
def formatweekday(self, day):
|
||||||
|
"""
|
||||||
|
Return a weekday name as a table header.
|
||||||
|
"""
|
||||||
|
return '<th class="%s">%s</th>' % (
|
||||||
|
self.cssclasses_weekday_head[day], day_abbr[day])
|
||||||
|
|
||||||
|
def formatweekheader(self):
|
||||||
|
"""
|
||||||
|
Return a header for a week as a table row.
|
||||||
|
"""
|
||||||
|
s = ''.join(self.formatweekday(i) for i in self.iterweekdays())
|
||||||
|
return '<tr>%s</tr>' % s
|
||||||
|
|
||||||
|
def formatmonthname(self, theyear, themonth, withyear=True):
|
||||||
|
"""
|
||||||
|
Return a month name as a table row.
|
||||||
|
"""
|
||||||
|
if withyear:
|
||||||
|
s = '%s %s' % (month_name[themonth], theyear)
|
||||||
|
else:
|
||||||
|
s = '%s' % month_name[themonth]
|
||||||
|
return '<tr><th colspan="7" class="%s">%s</th></tr>' % (
|
||||||
|
self.cssclass_month_head, s)
|
||||||
|
|
||||||
|
def formatmonth(self, theyear, themonth, withyear=True):
|
||||||
|
"""
|
||||||
|
Return a formatted month as a table.
|
||||||
|
"""
|
||||||
|
v = []
|
||||||
|
a = v.append
|
||||||
|
a('<table border="0" cellpadding="0" cellspacing="0" class="%s">' % (
|
||||||
|
self.cssclass_month))
|
||||||
|
a('\n')
|
||||||
|
a(self.formatmonthname(theyear, themonth, withyear=withyear))
|
||||||
|
a('\n')
|
||||||
|
a(self.formatweekheader())
|
||||||
|
a('\n')
|
||||||
|
for week in self.monthdays2calendar(theyear, themonth):
|
||||||
|
a(self.formatweek(week))
|
||||||
|
a('\n')
|
||||||
|
a('</table>')
|
||||||
|
a('\n')
|
||||||
|
return ''.join(v)
|
||||||
|
|
||||||
|
def formatyear(self, theyear, width=3):
|
||||||
|
"""
|
||||||
|
Return a formatted year as a table of tables.
|
||||||
|
"""
|
||||||
|
v = []
|
||||||
|
a = v.append
|
||||||
|
width = max(width, 1)
|
||||||
|
a('<table border="0" cellpadding="0" cellspacing="0" class="%s">' %
|
||||||
|
self.cssclass_year)
|
||||||
|
a('\n')
|
||||||
|
a('<tr><th colspan="%d" class="%s">%s</th></tr>' % (
|
||||||
|
width, self.cssclass_year_head, theyear))
|
||||||
|
for i in range(January, January+12, width):
|
||||||
|
# months in this row
|
||||||
|
months = range(i, min(i+width, 13))
|
||||||
|
a('<tr>')
|
||||||
|
for m in months:
|
||||||
|
a('<td>')
|
||||||
|
a(self.formatmonth(theyear, m, withyear=False))
|
||||||
|
a('</td>')
|
||||||
|
a('</tr>')
|
||||||
|
a('</table>')
|
||||||
|
return ''.join(v)
|
||||||
|
|
||||||
|
def formatyearpage(self, theyear, width=3, css='calendar.css', encoding=None):
|
||||||
|
"""
|
||||||
|
Return a formatted year as a complete HTML page.
|
||||||
|
"""
|
||||||
|
if encoding is None:
|
||||||
|
encoding = sys.getdefaultencoding()
|
||||||
|
v = []
|
||||||
|
a = v.append
|
||||||
|
a('<?xml version="1.0" encoding="%s"?>\n' % encoding)
|
||||||
|
a('<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">\n')
|
||||||
|
a('<html>\n')
|
||||||
|
a('<head>\n')
|
||||||
|
a('<meta http-equiv="Content-Type" content="text/html; charset=%s" />\n' % encoding)
|
||||||
|
if css is not None:
|
||||||
|
a('<link rel="stylesheet" type="text/css" href="%s" />\n' % css)
|
||||||
|
a('<title>Calendar for %d</title>\n' % theyear)
|
||||||
|
a('</head>\n')
|
||||||
|
a('<body>\n')
|
||||||
|
a(self.formatyear(theyear, width))
|
||||||
|
a('</body>\n')
|
||||||
|
a('</html>\n')
|
||||||
|
return ''.join(v).encode(encoding, "xmlcharrefreplace")
|
||||||
|
|
||||||
|
|
||||||
|
class different_locale:
|
||||||
|
def __init__(self, locale):
|
||||||
|
self.locale = locale
|
||||||
|
|
||||||
|
def __enter__(self):
|
||||||
|
self.oldlocale = _locale.getlocale(_locale.LC_TIME)
|
||||||
|
_locale.setlocale(_locale.LC_TIME, self.locale)
|
||||||
|
|
||||||
|
def __exit__(self, *args):
|
||||||
|
_locale.setlocale(_locale.LC_TIME, self.oldlocale)
|
||||||
|
|
||||||
|
|
||||||
|
class LocaleTextCalendar(TextCalendar):
|
||||||
|
"""
|
||||||
|
This class can be passed a locale name in the constructor and will return
|
||||||
|
month and weekday names in the specified locale. If this locale includes
|
||||||
|
an encoding all strings containing month and weekday names will be returned
|
||||||
|
as unicode.
|
||||||
|
"""
|
||||||
|
|
||||||
|
def __init__(self, firstweekday=0, locale=None):
|
||||||
|
TextCalendar.__init__(self, firstweekday)
|
||||||
|
if locale is None:
|
||||||
|
locale = _locale.getdefaultlocale()
|
||||||
|
self.locale = locale
|
||||||
|
|
||||||
|
def formatweekday(self, day, width):
|
||||||
|
with different_locale(self.locale):
|
||||||
|
if width >= 9:
|
||||||
|
names = day_name
|
||||||
|
else:
|
||||||
|
names = day_abbr
|
||||||
|
name = names[day]
|
||||||
|
return name[:width].center(width)
|
||||||
|
|
||||||
|
def formatmonthname(self, theyear, themonth, width, withyear=True):
|
||||||
|
with different_locale(self.locale):
|
||||||
|
s = month_name[themonth]
|
||||||
|
if withyear:
|
||||||
|
s = "%s %r" % (s, theyear)
|
||||||
|
return s.center(width)
|
||||||
|
|
||||||
|
|
||||||
|
class LocaleHTMLCalendar(HTMLCalendar):
|
||||||
|
"""
|
||||||
|
This class can be passed a locale name in the constructor and will return
|
||||||
|
month and weekday names in the specified locale. If this locale includes
|
||||||
|
an encoding all strings containing month and weekday names will be returned
|
||||||
|
as unicode.
|
||||||
|
"""
|
||||||
|
def __init__(self, firstweekday=0, locale=None):
|
||||||
|
HTMLCalendar.__init__(self, firstweekday)
|
||||||
|
if locale is None:
|
||||||
|
locale = _locale.getdefaultlocale()
|
||||||
|
self.locale = locale
|
||||||
|
|
||||||
|
def formatweekday(self, day):
|
||||||
|
with different_locale(self.locale):
|
||||||
|
s = day_abbr[day]
|
||||||
|
return '<th class="%s">%s</th>' % (self.cssclasses[day], s)
|
||||||
|
|
||||||
|
def formatmonthname(self, theyear, themonth, withyear=True):
|
||||||
|
with different_locale(self.locale):
|
||||||
|
s = month_name[themonth]
|
||||||
|
if withyear:
|
||||||
|
s = '%s %s' % (s, theyear)
|
||||||
|
return '<tr><th colspan="7" class="month">%s</th></tr>' % s
|
||||||
|
|
||||||
|
|
||||||
|
# Support for old module level interface
|
||||||
|
c = TextCalendar()
|
||||||
|
|
||||||
|
firstweekday = c.getfirstweekday
|
||||||
|
|
||||||
|
def setfirstweekday(firstweekday):
|
||||||
|
if not MONDAY <= firstweekday <= SUNDAY:
|
||||||
|
raise IllegalWeekdayError(firstweekday)
|
||||||
|
c.firstweekday = firstweekday
|
||||||
|
|
||||||
|
monthcalendar = c.monthdayscalendar
|
||||||
|
prweek = c.prweek
|
||||||
|
week = c.formatweek
|
||||||
|
weekheader = c.formatweekheader
|
||||||
|
prmonth = c.prmonth
|
||||||
|
month = c.formatmonth
|
||||||
|
calendar = c.formatyear
|
||||||
|
prcal = c.pryear
|
||||||
|
|
||||||
|
|
||||||
|
# Spacing of month columns for multi-column year calendar
|
||||||
|
_colwidth = 7*3 - 1 # Amount printed by prweek()
|
||||||
|
_spacing = 6 # Number of spaces between columns
|
||||||
|
|
||||||
|
|
||||||
|
def format(cols, colwidth=_colwidth, spacing=_spacing):
|
||||||
|
"""Prints multi-column formatting for year calendars"""
|
||||||
|
print(formatstring(cols, colwidth, spacing))
|
||||||
|
|
||||||
|
|
||||||
|
def formatstring(cols, colwidth=_colwidth, spacing=_spacing):
|
||||||
|
"""Returns a string formatted from n strings, centered within n columns."""
|
||||||
|
spacing *= ' '
|
||||||
|
return spacing.join(c.center(colwidth) for c in cols)
|
||||||
|
|
||||||
|
|
||||||
|
EPOCH = 1970
|
||||||
|
_EPOCH_ORD = datetime.date(EPOCH, 1, 1).toordinal()
|
||||||
|
|
||||||
|
|
||||||
|
def timegm(tuple):
|
||||||
|
"""Unrelated but handy function to calculate Unix timestamp from GMT."""
|
||||||
|
year, month, day, hour, minute, second = tuple[:6]
|
||||||
|
days = datetime.date(year, month, 1).toordinal() - _EPOCH_ORD + day - 1
|
||||||
|
hours = days*24 + hour
|
||||||
|
minutes = hours*60 + minute
|
||||||
|
seconds = minutes*60 + second
|
||||||
|
return seconds
|
||||||
|
|
||||||
|
|
||||||
|
def main(args):
|
||||||
|
import argparse
|
||||||
|
parser = argparse.ArgumentParser()
|
||||||
|
textgroup = parser.add_argument_group('text only arguments')
|
||||||
|
htmlgroup = parser.add_argument_group('html only arguments')
|
||||||
|
textgroup.add_argument(
|
||||||
|
"-w", "--width",
|
||||||
|
type=int, default=2,
|
||||||
|
help="width of date column (default 2)"
|
||||||
|
)
|
||||||
|
textgroup.add_argument(
|
||||||
|
"-l", "--lines",
|
||||||
|
type=int, default=1,
|
||||||
|
help="number of lines for each week (default 1)"
|
||||||
|
)
|
||||||
|
textgroup.add_argument(
|
||||||
|
"-s", "--spacing",
|
||||||
|
type=int, default=6,
|
||||||
|
help="spacing between months (default 6)"
|
||||||
|
)
|
||||||
|
textgroup.add_argument(
|
||||||
|
"-m", "--months",
|
||||||
|
type=int, default=3,
|
||||||
|
help="months per row (default 3)"
|
||||||
|
)
|
||||||
|
htmlgroup.add_argument(
|
||||||
|
"-c", "--css",
|
||||||
|
default="calendar.css",
|
||||||
|
help="CSS to use for page"
|
||||||
|
)
|
||||||
|
parser.add_argument(
|
||||||
|
"-L", "--locale",
|
||||||
|
default=None,
|
||||||
|
help="locale to be used from month and weekday names"
|
||||||
|
)
|
||||||
|
parser.add_argument(
|
||||||
|
"-e", "--encoding",
|
||||||
|
default=None,
|
||||||
|
help="encoding to use for output"
|
||||||
|
)
|
||||||
|
parser.add_argument(
|
||||||
|
"-t", "--type",
|
||||||
|
default="text",
|
||||||
|
choices=("text", "html"),
|
||||||
|
help="output type (text or html)"
|
||||||
|
)
|
||||||
|
parser.add_argument(
|
||||||
|
"year",
|
||||||
|
nargs='?', type=int,
|
||||||
|
help="year number (1-9999)"
|
||||||
|
)
|
||||||
|
parser.add_argument(
|
||||||
|
"month",
|
||||||
|
nargs='?', type=int,
|
||||||
|
help="month number (1-12, text only)"
|
||||||
|
)
|
||||||
|
|
||||||
|
options = parser.parse_args(args[1:])
|
||||||
|
|
||||||
|
if options.locale and not options.encoding:
|
||||||
|
parser.error("if --locale is specified --encoding is required")
|
||||||
|
sys.exit(1)
|
||||||
|
|
||||||
|
locale = options.locale, options.encoding
|
||||||
|
|
||||||
|
if options.type == "html":
|
||||||
|
if options.locale:
|
||||||
|
cal = LocaleHTMLCalendar(locale=locale)
|
||||||
|
else:
|
||||||
|
cal = HTMLCalendar()
|
||||||
|
encoding = options.encoding
|
||||||
|
if encoding is None:
|
||||||
|
encoding = sys.getdefaultencoding()
|
||||||
|
optdict = dict(encoding=encoding, css=options.css)
|
||||||
|
write = sys.stdout.buffer.write
|
||||||
|
if options.year is None:
|
||||||
|
write(cal.formatyearpage(datetime.date.today().year, **optdict))
|
||||||
|
elif options.month is None:
|
||||||
|
write(cal.formatyearpage(options.year, **optdict))
|
||||||
|
else:
|
||||||
|
parser.error("incorrect number of arguments")
|
||||||
|
sys.exit(1)
|
||||||
|
else:
|
||||||
|
if options.locale:
|
||||||
|
cal = LocaleTextCalendar(locale=locale)
|
||||||
|
else:
|
||||||
|
cal = TextCalendar()
|
||||||
|
optdict = dict(w=options.width, l=options.lines)
|
||||||
|
if options.month is None:
|
||||||
|
optdict["c"] = options.spacing
|
||||||
|
optdict["m"] = options.months
|
||||||
|
if options.year is None:
|
||||||
|
result = cal.formatyear(datetime.date.today().year, **optdict)
|
||||||
|
elif options.month is None:
|
||||||
|
result = cal.formatyear(options.year, **optdict)
|
||||||
|
else:
|
||||||
|
result = cal.formatmonth(options.year, options.month, **optdict)
|
||||||
|
write = sys.stdout.write
|
||||||
|
if options.encoding:
|
||||||
|
result = result.encode(options.encoding)
|
||||||
|
write = sys.stdout.buffer.write
|
||||||
|
write(result)
|
||||||
|
|
||||||
|
|
||||||
|
if __name__ == "__main__":
|
||||||
|
main(sys.argv)
|
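A short sketch exercising the module above (not part of the commit):

    import calendar

    print(calendar.month(2018, 7))          # plain-text month, like cal(1)
    print(calendar.monthrange(2018, 7))     # (weekday of the 1st, days in month)
    print(calendar.isleap(2000), calendar.leapdays(1900, 2000))

    cal = calendar.TextCalendar(firstweekday=calendar.SUNDAY)
    for week in cal.monthdays2calendar(2018, 7):
        print(week)                         # (day, weekday) tuples; 0 pads other months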
1019  Lib/cgi.py  Normal file
(File diff suppressed because it is too large)
319  Lib/cgitb.py  Normal file
@@ -0,0 +1,319 @@
"""More comprehensive traceback formatting for Python scripts.
|
||||||
|
|
||||||
|
To enable this module, do:
|
||||||
|
|
||||||
|
import cgitb; cgitb.enable()
|
||||||
|
|
||||||
|
at the top of your script. The optional arguments to enable() are:
|
||||||
|
|
||||||
|
display - if true, tracebacks are displayed in the web browser
|
||||||
|
logdir - if set, tracebacks are written to files in this directory
|
||||||
|
context - number of lines of source code to show for each stack frame
|
||||||
|
format - 'text' or 'html' controls the output format
|
||||||
|
|
||||||
|
By default, tracebacks are displayed but not saved, the context is 5 lines
|
||||||
|
and the output format is 'html' (for backwards compatibility with the
|
||||||
|
original use of this module)
|
||||||
|
|
||||||
|
Alternatively, if you have caught an exception and want cgitb to display it
|
||||||
|
for you, call cgitb.handler(). The optional argument to handler() is a
|
||||||
|
3-item tuple (etype, evalue, etb) just like the value of sys.exc_info().
|
||||||
|
The default handler displays output as HTML.
|
||||||
|
|
||||||
|
"""
|
||||||
|
import inspect
|
||||||
|
import keyword
|
||||||
|
import linecache
|
||||||
|
import os
|
||||||
|
import pydoc
|
||||||
|
import sys
|
||||||
|
import tempfile
|
||||||
|
import time
|
||||||
|
import tokenize
|
||||||
|
import traceback
|
||||||
|
|
||||||
|
def reset():
|
||||||
|
"""Return a string that resets the CGI and browser to a known state."""
|
||||||
|
return '''<!--: spam
|
||||||
|
Content-Type: text/html
|
||||||
|
|
||||||
|
<body bgcolor="#f0f0f8"><font color="#f0f0f8" size="-5"> -->
|
||||||
|
<body bgcolor="#f0f0f8"><font color="#f0f0f8" size="-5"> --> -->
|
||||||
|
</font> </font> </font> </script> </object> </blockquote> </pre>
|
||||||
|
</table> </table> </table> </table> </table> </font> </font> </font>'''
|
||||||
|
|
||||||
|
__UNDEF__ = [] # a special sentinel object
|
||||||
|
def small(text):
|
||||||
|
if text:
|
||||||
|
return '<small>' + text + '</small>'
|
||||||
|
else:
|
||||||
|
return ''
|
||||||
|
|
||||||
|
def strong(text):
|
||||||
|
if text:
|
||||||
|
return '<strong>' + text + '</strong>'
|
||||||
|
else:
|
||||||
|
return ''
|
||||||
|
|
||||||
|
def grey(text):
|
||||||
|
if text:
|
||||||
|
return '<font color="#909090">' + text + '</font>'
|
||||||
|
else:
|
||||||
|
return ''
|
||||||
|
|
||||||
|
def lookup(name, frame, locals):
|
||||||
|
"""Find the value for a given name in the given environment."""
|
||||||
|
if name in locals:
|
||||||
|
return 'local', locals[name]
|
||||||
|
if name in frame.f_globals:
|
||||||
|
return 'global', frame.f_globals[name]
|
||||||
|
if '__builtins__' in frame.f_globals:
|
||||||
|
builtins = frame.f_globals['__builtins__']
|
||||||
|
if type(builtins) is type({}):
|
||||||
|
if name in builtins:
|
||||||
|
return 'builtin', builtins[name]
|
||||||
|
else:
|
||||||
|
if hasattr(builtins, name):
|
||||||
|
return 'builtin', getattr(builtins, name)
|
||||||
|
return None, __UNDEF__
|
||||||
|
|
||||||
|
def scanvars(reader, frame, locals):
|
||||||
|
"""Scan one logical line of Python and look up values of variables used."""
|
||||||
|
vars, lasttoken, parent, prefix, value = [], None, None, '', __UNDEF__
|
||||||
|
for ttype, token, start, end, line in tokenize.generate_tokens(reader):
|
||||||
|
if ttype == tokenize.NEWLINE: break
|
||||||
|
if ttype == tokenize.NAME and token not in keyword.kwlist:
|
||||||
|
if lasttoken == '.':
|
||||||
|
if parent is not __UNDEF__:
|
||||||
|
value = getattr(parent, token, __UNDEF__)
|
||||||
|
vars.append((prefix + token, prefix, value))
|
||||||
|
else:
|
||||||
|
where, value = lookup(token, frame, locals)
|
||||||
|
vars.append((token, where, value))
|
||||||
|
elif token == '.':
|
||||||
|
prefix += lasttoken + '.'
|
||||||
|
parent = value
|
||||||
|
else:
|
||||||
|
parent, prefix = None, ''
|
||||||
|
lasttoken = token
|
||||||
|
return vars
|
||||||
|
|
||||||
|
def html(einfo, context=5):
|
||||||
|
"""Return a nice HTML document describing a given traceback."""
|
||||||
|
etype, evalue, etb = einfo
|
||||||
|
if isinstance(etype, type):
|
||||||
|
etype = etype.__name__
|
||||||
|
pyver = 'Python ' + sys.version.split()[0] + ': ' + sys.executable
|
||||||
|
date = time.ctime(time.time())
|
||||||
|
head = '<body bgcolor="#f0f0f8">' + pydoc.html.heading(
|
||||||
|
'<big><big>%s</big></big>' %
|
||||||
|
strong(pydoc.html.escape(str(etype))),
|
||||||
|
'#ffffff', '#6622aa', pyver + '<br>' + date) + '''
|
||||||
|
<p>A problem occurred in a Python script. Here is the sequence of
|
||||||
|
function calls leading up to the error, in the order they occurred.</p>'''
|
||||||
|
|
||||||
|
indent = '<tt>' + small(' ' * 5) + ' </tt>'
|
||||||
|
frames = []
|
||||||
|
records = inspect.getinnerframes(etb, context)
|
||||||
|
for frame, file, lnum, func, lines, index in records:
|
||||||
|
if file:
|
||||||
|
file = os.path.abspath(file)
|
||||||
|
link = '<a href="file://%s">%s</a>' % (file, pydoc.html.escape(file))
|
||||||
|
else:
|
||||||
|
file = link = '?'
|
||||||
|
args, varargs, varkw, locals = inspect.getargvalues(frame)
|
||||||
|
call = ''
|
||||||
|
if func != '?':
|
||||||
|
call = 'in ' + strong(pydoc.html.escape(func)) + \
|
||||||
|
inspect.formatargvalues(args, varargs, varkw, locals,
|
||||||
|
formatvalue=lambda value: '=' + pydoc.html.repr(value))
|
||||||
|
|
||||||
|
highlight = {}
|
||||||
|
def reader(lnum=[lnum]):
|
||||||
|
highlight[lnum[0]] = 1
|
||||||
|
try: return linecache.getline(file, lnum[0])
|
||||||
|
finally: lnum[0] += 1
|
||||||
|
vars = scanvars(reader, frame, locals)
|
||||||
|
|
||||||
|
rows = ['<tr><td bgcolor="#d8bbff">%s%s %s</td></tr>' %
|
||||||
|
('<big> </big>', link, call)]
|
||||||
|
if index is not None:
|
||||||
|
i = lnum - index
|
||||||
|
for line in lines:
|
||||||
|
num = small(' ' * (5-len(str(i))) + str(i)) + ' '
|
||||||
|
if i in highlight:
|
||||||
|
line = '<tt>=>%s%s</tt>' % (num, pydoc.html.preformat(line))
|
||||||
|
rows.append('<tr><td bgcolor="#ffccee">%s</td></tr>' % line)
|
||||||
|
else:
|
||||||
|
line = '<tt> %s%s</tt>' % (num, pydoc.html.preformat(line))
|
||||||
|
rows.append('<tr><td>%s</td></tr>' % grey(line))
|
||||||
|
i += 1
|
||||||
|
|
||||||
|
done, dump = {}, []
|
||||||
|
for name, where, value in vars:
|
||||||
|
if name in done: continue
|
||||||
|
done[name] = 1
|
||||||
|
if value is not __UNDEF__:
|
||||||
|
if where in ('global', 'builtin'):
|
||||||
|
name = ('<em>%s</em> ' % where) + strong(name)
|
||||||
|
elif where == 'local':
|
||||||
|
name = strong(name)
|
||||||
|
else:
|
||||||
|
name = where + strong(name.split('.')[-1])
|
||||||
|
dump.append('%s = %s' % (name, pydoc.html.repr(value)))
|
||||||
|
else:
|
||||||
|
dump.append(name + ' <em>undefined</em>')
|
||||||
|
|
||||||
|
rows.append('<tr><td>%s</td></tr>' % small(grey(', '.join(dump))))
|
||||||
|
frames.append('''
|
||||||
|
<table width="100%%" cellspacing=0 cellpadding=0 border=0>
|
||||||
|
%s</table>''' % '\n'.join(rows))
|
||||||
|
|
||||||
|
exception = ['<p>%s: %s' % (strong(pydoc.html.escape(str(etype))),
|
||||||
|
pydoc.html.escape(str(evalue)))]
|
||||||
|
for name in dir(evalue):
|
||||||
|
if name[:1] == '_': continue
|
||||||
|
value = pydoc.html.repr(getattr(evalue, name))
|
||||||
|
exception.append('\n<br>%s%s =\n%s' % (indent, name, value))
|
||||||
|
|
||||||
|
return head + ''.join(frames) + ''.join(exception) + '''
|
||||||
|
|
||||||
|
|
||||||
|
<!-- The above is a description of an error in a Python program, formatted
|
||||||
|
for a Web browser because the 'cgitb' module was enabled. In case you
|
||||||
|
are not reading this in a Web browser, here is the original traceback:
|
||||||
|
|
%s
-->
''' % pydoc.html.escape(
          ''.join(traceback.format_exception(etype, evalue, etb)))

def text(einfo, context=5):
    """Return a plain text document describing a given traceback."""
    etype, evalue, etb = einfo
    if isinstance(etype, type):
        etype = etype.__name__
    pyver = 'Python ' + sys.version.split()[0] + ': ' + sys.executable
    date = time.ctime(time.time())
    head = "%s\n%s\n%s\n" % (str(etype), pyver, date) + '''
A problem occurred in a Python script.  Here is the sequence of
function calls leading up to the error, in the order they occurred.
'''

    frames = []
    records = inspect.getinnerframes(etb, context)
    for frame, file, lnum, func, lines, index in records:
        file = file and os.path.abspath(file) or '?'
        args, varargs, varkw, locals = inspect.getargvalues(frame)
        call = ''
        if func != '?':
            call = 'in ' + func + \
                inspect.formatargvalues(args, varargs, varkw, locals,
                    formatvalue=lambda value: '=' + pydoc.text.repr(value))

        highlight = {}
        def reader(lnum=[lnum]):
            highlight[lnum[0]] = 1
            try: return linecache.getline(file, lnum[0])
            finally: lnum[0] += 1
        vars = scanvars(reader, frame, locals)

        rows = [' %s %s' % (file, call)]
        if index is not None:
            i = lnum - index
            for line in lines:
                num = '%5d ' % i
                rows.append(num+line.rstrip())
                i += 1

        done, dump = {}, []
        for name, where, value in vars:
            if name in done: continue
            done[name] = 1
            if value is not __UNDEF__:
                if where == 'global': name = 'global ' + name
                elif where != 'local': name = where + name.split('.')[-1]
                dump.append('%s = %s' % (name, pydoc.text.repr(value)))
            else:
                dump.append(name + ' undefined')

        rows.append('\n'.join(dump))
        frames.append('\n%s\n' % '\n'.join(rows))

    exception = ['%s: %s' % (str(etype), str(evalue))]
    for name in dir(evalue):
        value = pydoc.text.repr(getattr(evalue, name))
        exception.append('\n%s%s = %s' % (" "*4, name, value))

    return head + ''.join(frames) + ''.join(exception) + '''

The above is a description of an error in a Python program.  Here is
the original traceback:

%s
''' % ''.join(traceback.format_exception(etype, evalue, etb))

class Hook:
    """A hook to replace sys.excepthook that shows tracebacks in HTML."""

    def __init__(self, display=1, logdir=None, context=5, file=None,
                 format="html"):
        self.display = display          # send tracebacks to browser if true
        self.logdir = logdir            # log tracebacks to files if not None
        self.context = context          # number of source code lines per frame
        self.file = file or sys.stdout  # place to send the output
        self.format = format

    def __call__(self, etype, evalue, etb):
        self.handle((etype, evalue, etb))

    def handle(self, info=None):
        info = info or sys.exc_info()
        if self.format == "html":
            self.file.write(reset())

        formatter = (self.format=="html") and html or text
        plain = False
        try:
            doc = formatter(info, self.context)
        except:                         # just in case something goes wrong
            doc = ''.join(traceback.format_exception(*info))
            plain = True

        if self.display:
            if plain:
                doc = pydoc.html.escape(doc)
                self.file.write('<pre>' + doc + '</pre>\n')
            else:
                self.file.write(doc + '\n')
        else:
            self.file.write('<p>A problem occurred in a Python script.\n')

        if self.logdir is not None:
            suffix = ['.txt', '.html'][self.format=="html"]
            (fd, path) = tempfile.mkstemp(suffix=suffix, dir=self.logdir)

            try:
                with os.fdopen(fd, 'w') as file:
                    file.write(doc)
                msg = '%s contains the description of this error.' % path
            except:
                msg = 'Tried to save traceback to %s, but failed.' % path

            if self.format == 'html':
                self.file.write('<p>%s</p>\n' % msg)
            else:
                self.file.write(msg + '\n')
        try:
            self.file.flush()
        except: pass

handler = Hook().handle
def enable(display=1, logdir=None, context=5, format="html"):
    """Install an exception handler that formats tracebacks as HTML.

    The optional argument 'display' can be set to 0 to suppress sending the
    traceback to the browser, and 'logdir' can be set to a directory to cause
    tracebacks to be written to files there."""
    sys.excepthook = Hook(display=display, logdir=logdir,
                          context=context, format=format)
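As a quick orientation to the hook above, here is a minimal usage sketch. It is not part of this commit; the log directory and the demo error are illustrative assumptions.

# Minimal sketch (assumed usage): route uncaught exceptions through the
# Hook defined above, logging plain-text tracebacks instead of HTML.
import cgitb

# display=0 suppresses output to the browser/stdout; logdir is hypothetical.
cgitb.enable(display=0, logdir="/tmp/cgitb-logs", format="text")

# Any uncaught exception from here on is formatted by text() via Hook.handle.
raise ValueError("demo")  # hypothetical error, just to exercise the hook

Because enable() simply assigns a Hook instance to sys.excepthook, the same object can also be called directly with an exc_info triple.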
169
Lib/chunk.py
Normal file
@@ -0,0 +1,169 @@
"""Simple class to read IFF chunks.

An IFF chunk (used in formats such as AIFF, TIFF, RMFF (RealMedia File
Format)) has the following structure:

+----------------+
| ID (4 bytes)   |
+----------------+
| size (4 bytes) |
+----------------+
| data           |
| ...            |
+----------------+

The ID is a 4-byte string which identifies the type of chunk.

The size field (a 32-bit value, encoded using big-endian byte order)
gives the size of the whole chunk, including the 8-byte header.

Usually an IFF-type file consists of one or more chunks.  The proposed
usage of the Chunk class defined here is to instantiate an instance at
the start of each chunk and read from the instance until it reaches
the end, after which a new instance can be instantiated.  At the end
of the file, creating a new instance will fail with an EOFError
exception.

Usage:
    while True:
        try:
            chunk = Chunk(file)
        except EOFError:
            break
        chunktype = chunk.getname()
        while True:
            data = chunk.read(nbytes)
            if not data:
                break
            # do something with data

The interface is file-like.  The implemented methods are:
read, close, seek, tell, isatty.
Extra methods are: skip() (called by close, skips to the end of the chunk),
getname() (returns the name (ID) of the chunk)

The __init__ method has one required argument, a file-like object
(including a chunk instance), and one optional argument, a flag which
specifies whether or not chunks are aligned on 2-byte boundaries.  The
default is 1, i.e. aligned.
"""

class Chunk:
    def __init__(self, file, align=True, bigendian=True, inclheader=False):
        import struct
        self.closed = False
        self.align = align      # whether to align to word (2-byte) boundaries
        if bigendian:
            strflag = '>'
        else:
            strflag = '<'
        self.file = file
        self.chunkname = file.read(4)
        if len(self.chunkname) < 4:
            raise EOFError
        try:
            self.chunksize = struct.unpack_from(strflag+'L', file.read(4))[0]
        except struct.error:
            raise EOFError from None
        if inclheader:
            self.chunksize = self.chunksize - 8 # subtract header
        self.size_read = 0
        try:
            self.offset = self.file.tell()
        except (AttributeError, OSError):
            self.seekable = False
        else:
            self.seekable = True

    def getname(self):
        """Return the name (ID) of the current chunk."""
        return self.chunkname

    def getsize(self):
        """Return the size of the current chunk."""
        return self.chunksize

    def close(self):
        if not self.closed:
            try:
                self.skip()
            finally:
                self.closed = True

    def isatty(self):
        if self.closed:
            raise ValueError("I/O operation on closed file")
        return False

    def seek(self, pos, whence=0):
        """Seek to specified position into the chunk.
        Default position is 0 (start of chunk).
        If the file is not seekable, this will result in an error.
        """

        if self.closed:
            raise ValueError("I/O operation on closed file")
        if not self.seekable:
            raise OSError("cannot seek")
        if whence == 1:
            pos = pos + self.size_read
        elif whence == 2:
            pos = pos + self.chunksize
        if pos < 0 or pos > self.chunksize:
            raise RuntimeError
        self.file.seek(self.offset + pos, 0)
        self.size_read = pos

    def tell(self):
        if self.closed:
            raise ValueError("I/O operation on closed file")
        return self.size_read

    def read(self, size=-1):
        """Read at most size bytes from the chunk.
        If size is omitted or negative, read until the end
        of the chunk.
        """

        if self.closed:
            raise ValueError("I/O operation on closed file")
        if self.size_read >= self.chunksize:
            return b''
        if size < 0:
            size = self.chunksize - self.size_read
        if size > self.chunksize - self.size_read:
            size = self.chunksize - self.size_read
        data = self.file.read(size)
        self.size_read = self.size_read + len(data)
        if self.size_read == self.chunksize and \
           self.align and \
           (self.chunksize & 1):
            dummy = self.file.read(1)
            self.size_read = self.size_read + len(dummy)
        return data

    def skip(self):
        """Skip the rest of the chunk.
        If you are not interested in the contents of the chunk,
        this method should be called so that the file points to
        the start of the next chunk.
        """

        if self.closed:
            raise ValueError("I/O operation on closed file")
        if self.seekable:
            try:
                n = self.chunksize - self.size_read
                # maybe fix alignment
                if self.align and (self.chunksize & 1):
                    n = n + 1
                self.file.seek(n, 1)
                self.size_read = self.size_read + n
                return
            except OSError:
                pass
        while self.size_read < self.chunksize:
            n = min(8192, self.chunksize - self.size_read)
            dummy = self.read(n)
            if not dummy:
                raise EOFError
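To make the header layout described in the module docstring concrete, here is a small self-contained sketch. It is not part of this commit; the chunk ID "DEMO" and the in-memory buffer are illustrative assumptions.

# Assumed usage: build one aligned big-endian chunk in memory and read it
# back with the Chunk class above.
import io
import struct
from chunk import Chunk  # the module added in this file

payload = b"hello"
# ID (4 bytes) + big-endian 32-bit size + data; odd-sized data gets a pad byte.
raw = b"DEMO" + struct.pack(">L", len(payload)) + payload + b"\x00"

ck = Chunk(io.BytesIO(raw))
print(ck.getname())   # b'DEMO'
print(ck.getsize())   # 5
print(ck.read())      # b'hello'
ck.close()            # skips remaining bytes, honouring 2-byte alignment

Note that with the default inclheader=False the size field is taken as the data length only; close() consumes the pad byte so the underlying stream is positioned at the next chunk.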
401
Lib/cmd.py
Normal file
@@ -0,0 +1,401 @@
"""A generic class to build line-oriented command interpreters.

Interpreters constructed with this class obey the following conventions:

1. End of file on input is processed as the command 'EOF'.
2. A command is parsed out of each line by collecting the prefix composed
   of characters in the identchars member.
3. A command `foo' is dispatched to a method 'do_foo()'; the do_ method
   is passed a single argument consisting of the remainder of the line.
4. Typing an empty line repeats the last command.  (Actually, it calls the
   method `emptyline', which may be overridden in a subclass.)
5. There is a predefined `help' method.  Given an argument `topic', it
   calls the command `help_topic'.  With no arguments, it lists all topics
   with defined help_ functions, broken into up to three topics: documented
   commands, miscellaneous help topics, and undocumented commands.
6. The command '?' is a synonym for `help'.  The command '!' is a synonym
   for `shell', if a do_shell method exists.
7. If completion is enabled, completing commands will be done automatically,
   and completing of commands args is done by calling complete_foo() with
   arguments text, line, begidx, endidx.  text is the string we are matching
   against; all returned matches must begin with it.  line is the current
   input line (lstripped), begidx and endidx are the beginning and end
   indexes of the text being matched, which could be used to provide
   different completion depending upon which position the argument is in.

The `default' method may be overridden to intercept commands for which there
is no do_ method.

The `completedefault' method may be overridden to intercept completions for
commands that have no complete_ method.

The data member `self.ruler' sets the character used to draw separator lines
in the help messages.  If empty, no ruler line is drawn.  It defaults to "=".

If the value of `self.intro' is nonempty when the cmdloop method is called,
it is printed out on interpreter startup.  This value may be overridden
via an optional argument to the cmdloop() method.

The data members `self.doc_header', `self.misc_header', and
`self.undoc_header' set the headers used for the help function's
listings of documented functions, miscellaneous topics, and undocumented
functions respectively.
"""

import string, sys

__all__ = ["Cmd"]

PROMPT = '(Cmd) '
IDENTCHARS = string.ascii_letters + string.digits + '_'

class Cmd:
    """A simple framework for writing line-oriented command interpreters.

    These are often useful for test harnesses, administrative tools, and
    prototypes that will later be wrapped in a more sophisticated interface.

    A Cmd instance or subclass instance is a line-oriented interpreter
    framework.  There is no good reason to instantiate Cmd itself; rather,
    it's useful as a superclass of an interpreter class you define yourself
    in order to inherit Cmd's methods and encapsulate action methods.

    """
    prompt = PROMPT
    identchars = IDENTCHARS
    ruler = '='
    lastcmd = ''
    intro = None
    doc_leader = ""
    doc_header = "Documented commands (type help <topic>):"
    misc_header = "Miscellaneous help topics:"
    undoc_header = "Undocumented commands:"
    nohelp = "*** No help on %s"
    use_rawinput = 1

    def __init__(self, completekey='tab', stdin=None, stdout=None):
        """Instantiate a line-oriented interpreter framework.

        The optional argument 'completekey' is the readline name of a
        completion key; it defaults to the Tab key.  If completekey is
        not None and the readline module is available, command completion
        is done automatically.  The optional arguments stdin and stdout
        specify alternate input and output file objects; if not specified,
        sys.stdin and sys.stdout are used.

        """
        if stdin is not None:
            self.stdin = stdin
        else:
            self.stdin = sys.stdin
        if stdout is not None:
            self.stdout = stdout
        else:
            self.stdout = sys.stdout
        self.cmdqueue = []
        self.completekey = completekey

    def cmdloop(self, intro=None):
        """Repeatedly issue a prompt, accept input, parse an initial prefix
        off the received input, and dispatch to action methods, passing them
        the remainder of the line as argument.

        """

        self.preloop()
        if self.use_rawinput and self.completekey:
            try:
                import readline
                self.old_completer = readline.get_completer()
                readline.set_completer(self.complete)
                readline.parse_and_bind(self.completekey+": complete")
            except ImportError:
                pass
        try:
            if intro is not None:
                self.intro = intro
            if self.intro:
                self.stdout.write(str(self.intro)+"\n")
            stop = None
            while not stop:
                if self.cmdqueue:
                    line = self.cmdqueue.pop(0)
                else:
                    if self.use_rawinput:
                        try:
                            line = input(self.prompt)
                        except EOFError:
                            line = 'EOF'
                    else:
                        self.stdout.write(self.prompt)
                        self.stdout.flush()
                        line = self.stdin.readline()
                        if not len(line):
                            line = 'EOF'
                        else:
                            line = line.rstrip('\r\n')
                line = self.precmd(line)
                stop = self.onecmd(line)
                stop = self.postcmd(stop, line)
            self.postloop()
        finally:
            if self.use_rawinput and self.completekey:
                try:
                    import readline
                    readline.set_completer(self.old_completer)
                except ImportError:
                    pass


    def precmd(self, line):
        """Hook method executed just before the command line is
        interpreted, but after the input prompt is generated and issued.

        """
        return line

    def postcmd(self, stop, line):
        """Hook method executed just after a command dispatch is finished."""
        return stop

    def preloop(self):
        """Hook method executed once when the cmdloop() method is called."""
        pass

    def postloop(self):
        """Hook method executed once when the cmdloop() method is about to
        return.

        """
        pass

    def parseline(self, line):
        """Parse the line into a command name and a string containing
        the arguments.  Returns a tuple containing (command, args, line).
        'command' and 'args' may be None if the line couldn't be parsed.
        """
        line = line.strip()
        if not line:
            return None, None, line
        elif line[0] == '?':
            line = 'help ' + line[1:]
        elif line[0] == '!':
            if hasattr(self, 'do_shell'):
                line = 'shell ' + line[1:]
            else:
                return None, None, line
        i, n = 0, len(line)
        while i < n and line[i] in self.identchars: i = i+1
        cmd, arg = line[:i], line[i:].strip()
        return cmd, arg, line

    def onecmd(self, line):
        """Interpret the argument as though it had been typed in response
        to the prompt.

        This may be overridden, but should not normally need to be;
        see the precmd() and postcmd() methods for useful execution hooks.
        The return value is a flag indicating whether interpretation of
        commands by the interpreter should stop.

        """
        cmd, arg, line = self.parseline(line)
        if not line:
            return self.emptyline()
        if cmd is None:
            return self.default(line)
        self.lastcmd = line
        if line == 'EOF':
            self.lastcmd = ''
        if cmd == '':
            return self.default(line)
        else:
            try:
                func = getattr(self, 'do_' + cmd)
            except AttributeError:
                return self.default(line)
            return func(arg)

    def emptyline(self):
        """Called when an empty line is entered in response to the prompt.

        If this method is not overridden, it repeats the last nonempty
        command entered.

        """
        if self.lastcmd:
            return self.onecmd(self.lastcmd)

    def default(self, line):
        """Called on an input line when the command prefix is not recognized.

        If this method is not overridden, it prints an error message and
        returns.

        """
        self.stdout.write('*** Unknown syntax: %s\n'%line)

    def completedefault(self, *ignored):
        """Method called to complete an input line when no command-specific
        complete_*() method is available.

        By default, it returns an empty list.

        """
        return []

    def completenames(self, text, *ignored):
        dotext = 'do_'+text
        return [a[3:] for a in self.get_names() if a.startswith(dotext)]

    def complete(self, text, state):
        """Return the next possible completion for 'text'.

        If a command has not been entered, then complete against command list.
        Otherwise try to call complete_<command> to get list of completions.
        """
        if state == 0:
            import readline
            origline = readline.get_line_buffer()
            line = origline.lstrip()
            stripped = len(origline) - len(line)
            begidx = readline.get_begidx() - stripped
            endidx = readline.get_endidx() - stripped
            if begidx>0:
                cmd, args, foo = self.parseline(line)
                if cmd == '':
                    compfunc = self.completedefault
                else:
                    try:
                        compfunc = getattr(self, 'complete_' + cmd)
                    except AttributeError:
                        compfunc = self.completedefault
            else:
                compfunc = self.completenames
            self.completion_matches = compfunc(text, line, begidx, endidx)
        try:
            return self.completion_matches[state]
        except IndexError:
            return None

    def get_names(self):
        # This method used to pull in base class attributes
        # at a time dir() didn't do it yet.
        return dir(self.__class__)

    def complete_help(self, *args):
        commands = set(self.completenames(*args))
        topics = set(a[5:] for a in self.get_names()
                     if a.startswith('help_' + args[0]))
        return list(commands | topics)

    def do_help(self, arg):
        'List available commands with "help" or detailed help with "help cmd".'
        if arg:
            # XXX check arg syntax
            try:
                func = getattr(self, 'help_' + arg)
            except AttributeError:
                try:
                    doc=getattr(self, 'do_' + arg).__doc__
                    if doc:
                        self.stdout.write("%s\n"%str(doc))
                        return
                except AttributeError:
                    pass
                self.stdout.write("%s\n"%str(self.nohelp % (arg,)))
                return
            func()
        else:
            names = self.get_names()
            cmds_doc = []
            cmds_undoc = []
            help = {}
            for name in names:
                if name[:5] == 'help_':
                    help[name[5:]]=1
            names.sort()
            # There can be duplicates if routines overridden
            prevname = ''
            for name in names:
                if name[:3] == 'do_':
                    if name == prevname:
                        continue
                    prevname = name
                    cmd=name[3:]
                    if cmd in help:
                        cmds_doc.append(cmd)
                        del help[cmd]
                    elif getattr(self, name).__doc__:
                        cmds_doc.append(cmd)
                    else:
                        cmds_undoc.append(cmd)
            self.stdout.write("%s\n"%str(self.doc_leader))
            self.print_topics(self.doc_header, cmds_doc, 15,80)
            self.print_topics(self.misc_header, list(help.keys()),15,80)
            self.print_topics(self.undoc_header, cmds_undoc, 15,80)

    def print_topics(self, header, cmds, cmdlen, maxcol):
        if cmds:
            self.stdout.write("%s\n"%str(header))
            if self.ruler:
                self.stdout.write("%s\n"%str(self.ruler * len(header)))
            self.columnize(cmds, maxcol-1)
            self.stdout.write("\n")

    def columnize(self, list, displaywidth=80):
        """Display a list of strings as a compact set of columns.

        Each column is only as wide as necessary.
        Columns are separated by two spaces (one was not legible enough).
        """
        if not list:
            self.stdout.write("<empty>\n")
            return

        nonstrings = [i for i in range(len(list))
                        if not isinstance(list[i], str)]
        if nonstrings:
            raise TypeError("list[i] not a string for i in %s"
                            % ", ".join(map(str, nonstrings)))
        size = len(list)
        if size == 1:
            self.stdout.write('%s\n'%str(list[0]))
            return
        # Try every row count from 1 upwards
        for nrows in range(1, len(list)):
            ncols = (size+nrows-1) // nrows
            colwidths = []
            totwidth = -2
            for col in range(ncols):
                colwidth = 0
                for row in range(nrows):
                    i = row + nrows*col
                    if i >= size:
                        break
                    x = list[i]
                    colwidth = max(colwidth, len(x))
                colwidths.append(colwidth)
                totwidth += colwidth + 2
                if totwidth > displaywidth:
                    break
            if totwidth <= displaywidth:
                break
        else:
            nrows = len(list)
            ncols = 1
            colwidths = [0]
        for row in range(nrows):
            texts = []
            for col in range(ncols):
                i = row + nrows*col
                if i >= size:
                    x = ""
                else:
                    x = list[i]
                texts.append(x)
            while texts and not texts[-1]:
                del texts[-1]
            for col in range(len(texts)):
                texts[col] = texts[col].ljust(colwidths[col])
            self.stdout.write("%s\n"%str("  ".join(texts)))
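The conventions listed in the module docstring are easiest to see in a tiny subclass. The following sketch is not part of this commit; the Greeter class and its commands are illustrative assumptions.

# Assumed usage: a minimal Cmd subclass with one documented command and an
# EOF handler, so Ctrl-D (dispatched as the 'EOF' command) ends the loop.
from cmd import Cmd  # the class added in this file

class Greeter(Cmd):
    intro = "Type help or ? to list commands."
    prompt = "(greet) "

    def do_hello(self, arg):
        'Say hello: hello [name]'
        self.stdout.write("Hello, %s\n" % (arg or "world"))

    def do_EOF(self, arg):
        'Exit on end-of-file.'
        return True  # a true return value from a do_ method stops cmdloop()

if __name__ == "__main__":
    Greeter().cmdloop()

Because do_hello has a docstring, "help hello" prints it and the command is listed under doc_header; an undocumented do_ method would appear under undoc_header instead.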
315
Lib/code.py
Normal file
@@ -0,0 +1,315 @@
"""Utilities needed to emulate Python's interactive interpreter.

"""

# Inspired by similar code by Jeff Epler and Fredrik Lundh.


import sys
import traceback
from codeop import CommandCompiler, compile_command

__all__ = ["InteractiveInterpreter", "InteractiveConsole", "interact",
           "compile_command"]

class InteractiveInterpreter:
    """Base class for InteractiveConsole.

    This class deals with parsing and interpreter state (the user's
    namespace); it doesn't deal with input buffering or prompting or
    input file naming (the filename is always passed in explicitly).

    """

    def __init__(self, locals=None):
        """Constructor.

        The optional 'locals' argument specifies the dictionary in
        which code will be executed; it defaults to a newly created
        dictionary with key "__name__" set to "__console__" and key
        "__doc__" set to None.

        """
        if locals is None:
            locals = {"__name__": "__console__", "__doc__": None}
        self.locals = locals
        self.compile = CommandCompiler()

    def runsource(self, source, filename="<input>", symbol="single"):
        """Compile and run some source in the interpreter.

        Arguments are as for compile_command().

        One of several things can happen:

        1) The input is incorrect; compile_command() raised an
        exception (SyntaxError or OverflowError).  A syntax traceback
        will be printed by calling the showsyntaxerror() method.

        2) The input is incomplete, and more input is required;
        compile_command() returned None.  Nothing happens.

        3) The input is complete; compile_command() returned a code
        object.  The code is executed by calling self.runcode() (which
        also handles run-time exceptions, except for SystemExit).

        The return value is True in case 2, False in the other cases (unless
        an exception is raised).  The return value can be used to
        decide whether to use sys.ps1 or sys.ps2 to prompt the next
        line.

        """
        try:
            code = self.compile(source, filename, symbol)
        except (OverflowError, SyntaxError, ValueError):
            # Case 1
            self.showsyntaxerror(filename)
            return False

        if code is None:
            # Case 2
            return True

        # Case 3
        self.runcode(code)
        return False

    def runcode(self, code):
        """Execute a code object.

        When an exception occurs, self.showtraceback() is called to
        display a traceback.  All exceptions are caught except
        SystemExit, which is reraised.

        A note about KeyboardInterrupt: this exception may occur
        elsewhere in this code, and may not always be caught.  The
        caller should be prepared to deal with it.

        """
        try:
            exec(code, self.locals)
        except SystemExit:
            raise
        except:
            self.showtraceback()

    def showsyntaxerror(self, filename=None):
        """Display the syntax error that just occurred.

        This doesn't display a stack trace because there isn't one.

        If a filename is given, it is stuffed in the exception instead
        of what was there before (because Python's parser always uses
        "<string>" when reading from a string).

        The output is written by self.write(), below.

        """
        type, value, tb = sys.exc_info()
        sys.last_type = type
        sys.last_value = value
        sys.last_traceback = tb
        if filename and type is SyntaxError:
            # Work hard to stuff the correct filename in the exception
            try:
                msg, (dummy_filename, lineno, offset, line) = value.args
            except ValueError:
                # Not the format we expect; leave it alone
                pass
            else:
                # Stuff in the right filename
                value = SyntaxError(msg, (filename, lineno, offset, line))
                sys.last_value = value
        if sys.excepthook is sys.__excepthook__:
            lines = traceback.format_exception_only(type, value)
            self.write(''.join(lines))
        else:
            # If someone has set sys.excepthook, we let that take precedence
            # over self.write
            sys.excepthook(type, value, tb)

    def showtraceback(self):
        """Display the exception that just occurred.

        We remove the first stack item because it is our own code.

        The output is written by self.write(), below.

        """
        sys.last_type, sys.last_value, last_tb = ei = sys.exc_info()
        sys.last_traceback = last_tb
        try:
            lines = traceback.format_exception(ei[0], ei[1], last_tb.tb_next)
            if sys.excepthook is sys.__excepthook__:
                self.write(''.join(lines))
            else:
                # If someone has set sys.excepthook, we let that take precedence
                # over self.write
                sys.excepthook(ei[0], ei[1], last_tb)
        finally:
            last_tb = ei = None

    def write(self, data):
        """Write a string.

        The base implementation writes to sys.stderr; a subclass may
        replace this with a different implementation.

        """
        sys.stderr.write(data)


class InteractiveConsole(InteractiveInterpreter):
    """Closely emulate the behavior of the interactive Python interpreter.

    This class builds on InteractiveInterpreter and adds prompting
    using the familiar sys.ps1 and sys.ps2, and input buffering.

    """

    def __init__(self, locals=None, filename="<console>"):
        """Constructor.

        The optional locals argument will be passed to the
        InteractiveInterpreter base class.

        The optional filename argument should specify the (file)name
        of the input stream; it will show up in tracebacks.

        """
        InteractiveInterpreter.__init__(self, locals)
        self.filename = filename
        self.resetbuffer()

    def resetbuffer(self):
        """Reset the input buffer."""
        self.buffer = []

    def interact(self, banner=None, exitmsg=None):
        """Closely emulate the interactive Python console.

        The optional banner argument specifies the banner to print
        before the first interaction; by default it prints a banner
        similar to the one printed by the real Python interpreter,
        followed by the current class name in parentheses (so as not
        to confuse this with the real interpreter -- since it's so
        close!).

        The optional exitmsg argument specifies the exit message
        printed when exiting.  Pass the empty string to suppress
        printing an exit message.  If exitmsg is not given or None,
        a default message is printed.

        """
        try:
            sys.ps1
        except AttributeError:
            sys.ps1 = ">>> "
        try:
            sys.ps2
        except AttributeError:
            sys.ps2 = "... "
        cprt = 'Type "help", "copyright", "credits" or "license" for more information.'
        if banner is None:
            self.write("Python %s on %s\n%s\n(%s)\n" %
                       (sys.version, sys.platform, cprt,
                        self.__class__.__name__))
        elif banner:
            self.write("%s\n" % str(banner))
        more = 0
        while 1:
            try:
                if more:
                    prompt = sys.ps2
                else:
                    prompt = sys.ps1
                try:
                    line = self.raw_input(prompt)
                except EOFError:
                    self.write("\n")
                    break
                else:
                    more = self.push(line)
            except KeyboardInterrupt:
                self.write("\nKeyboardInterrupt\n")
                self.resetbuffer()
                more = 0
        if exitmsg is None:
            self.write('now exiting %s...\n' % self.__class__.__name__)
        elif exitmsg != '':
            self.write('%s\n' % exitmsg)

    def push(self, line):
        """Push a line to the interpreter.

        The line should not have a trailing newline; it may have
        internal newlines.  The line is appended to a buffer and the
        interpreter's runsource() method is called with the
        concatenated contents of the buffer as source.  If this
        indicates that the command was executed or invalid, the buffer
        is reset; otherwise, the command is incomplete, and the buffer
        is left as it was after the line was appended.  The return
        value is 1 if more input is required, 0 if the line was dealt
        with in some way (this is the same as runsource()).

        """
        self.buffer.append(line)
        source = "\n".join(self.buffer)
        more = self.runsource(source, self.filename)
        if not more:
            self.resetbuffer()
        return more

    def raw_input(self, prompt=""):
        """Write a prompt and read a line.

        The returned line does not include the trailing newline.
        When the user enters the EOF key sequence, EOFError is raised.

        The base implementation uses the built-in function
        input(); a subclass may replace this with a different
        implementation.

        """
        return input(prompt)



def interact(banner=None, readfunc=None, local=None, exitmsg=None):
    """Closely emulate the interactive Python interpreter.

    This is a backwards compatible interface to the InteractiveConsole
    class.  When readfunc is not specified, it attempts to import the
    readline module to enable GNU readline if it is available.

    Arguments (all optional, all default to None):

    banner -- passed to InteractiveConsole.interact()
    readfunc -- if not None, replaces InteractiveConsole.raw_input()
    local -- passed to InteractiveInterpreter.__init__()
    exitmsg -- passed to InteractiveConsole.interact()

    """
    console = InteractiveConsole(local)
    if readfunc is not None:
        console.raw_input = readfunc
    else:
        try:
            import readline
        except ImportError:
            pass
    console.interact(banner, exitmsg)


if __name__ == "__main__":
    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument('-q', action='store_true',
                        help="don't print version and copyright messages")
    args = parser.parse_args()
    if args.q or sys.flags.quiet:
        banner = ''
    else:
        banner = None
    interact(banner)
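The InteractiveConsole class above is designed to be embedded in a host program. Here is a short sketch of that use; it is not part of this commit, and the seeded variable name is an illustrative assumption.

# Assumed usage: run the console above inside a host program, seeding the
# user namespace with one variable.
import code

namespace = {"answer": 42}
console = code.InteractiveConsole(locals=namespace)
# Drives the familiar >>> / ... prompts; exits when input() raises EOFError.
console.interact(banner="embedded console; 'answer' is predefined",
                 exitmsg="leaving embedded console")

For the common case of no customization, the module-level interact() function wraps the same flow and optionally enables GNU readline.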
Some files were not shown because too many files have changed in this diff.