Web Development | Data Science and Machine Learning | Question | is_accepted | Q_Id | Score | Other | Database and SQL | Users Score | Answer | Python Basics and Environment | ViewCount | System Administration and DevOps | Q_Score | CreationDate | Tags | Title | Networking and APIs | Available Count | AnswerCount | A_Id | GUI and Desktop Applications
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 0 | I tried using the ssl module in Python 2.6 but I was told that it wasn't available. After installing OpenSSL, I recompiled 2.6 but the problem persists.
Any suggestions? | false | 979,551 | -0.066568 | 0 | 0 | -1 | Use the binaries provided by python.org or by your OS distributor. It's a lot easier than building it yourself, and all the features are usually compiled in.
If you really need to build it yourself, you'll need to provide more information here about what build options you provided, what your environment is like, and p... | 0 | 7,646 | 0 | 2 | 2009-06-11T05:56:00.000 | python,ssl,openssl | Adding SSL support to Python 2.6 | 1 | 2 | 3 | 996,622 | 0 |
0 | 0 | I have a Pythonic HTTP server that is supposed to determine client's IP. How do I do that in Python? Is there any way to get the request headers and extract it from there?
PS: I'm using WebPy. | false | 979,599 | 0.291313 | 0 | 0 | 3 | web.env.get('REMOTE_ADDR') | 0 | 1,464 | 0 | 3 | 2009-06-11T06:16:00.000 | python,http,header,request,ip | Extracting IP from request in Python | 1 | 1 | 2 | 979,637 | 0 |
0 | 0 | I have installed lxml, which was built using a standalone version of libxml2. The reason for this was that lxml needed a later version of libxml2 than the one currently installed.
When I use the lxml module how do I tell it (python) where to find the correct version of the libxml2 shared library? | true | 985,155 | 1.2 | 0 | 0 | 5 | Assuming you're talking about a .so file, it's not up to Python to find it -- it's up to the operating system's dynamic library loader. For Linux, for example, LD_LIBRARY_PATH is the environment variable you need to set. | 1 | 2,097 | 0 | 5 | 2009-06-12T05:28:00.000 | python | How to specify native library search path for python | 1 | 1 | 1 | 985,176 | 0
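A minimal sketch of the LD_LIBRARY_PATH approach from the answer above. The loader reads the variable at process start, so it must be set in the environment of the process that will import lxml; the library path below is a hypothetical install location, not a real one.

```python
# Sketch: launch a Python process with LD_LIBRARY_PATH pointing at a custom
# libxml2 build. "/opt/libxml2/lib" is a stand-in location for the newer .so.
import os
import subprocess
import sys

env = dict(os.environ)
custom_lib_dir = "/opt/libxml2/lib"  # hypothetical path to the newer library
env["LD_LIBRARY_PATH"] = custom_lib_dir + os.pathsep + env.get("LD_LIBRARY_PATH", "")

# The child process (which would `import lxml`) sees the variable; the
# dynamic loader consults it before the system default search paths.
out = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['LD_LIBRARY_PATH'])"],
    env=env, capture_output=True, text=True,
).stdout.strip()
print(out)
```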
1 | 0 | I have a django application hosted on webfaction which now has a static/private ip.
Our network in the office is obviously behind a firewall and the AD server is running behind this firewall. From inside the network i can authenticate using python-ldap with the AD's internal IP address and the port 389 and all works we... | false | 990,459 | 0 | 0 | 0 | 0 | There are quite a few components between your hosted django application and your internal AD. You will need to test each to see if everything in the pathways between them is correct.
So your AD server is sitting behind your firewall. Your firewall has ip "a.b.c.d" and all traffic to the firewall ip on port 389 is forwa... | 0 | 2,571 | 0 | 0 | 2009-06-13T10:09:00.000 | python,active-directory,ldap,webserver | Python LDAP Authentication from remote web server | 1 | 1 | 2 | 991,550 | 0 |
1 | 0 | I want to use mechanize with Python to get all the links on a page and then open those links. How can I do it? | false | 1,011,975 | 0.197375 | 0 | 0 | 2 | The Browser object in mechanize has a links method that will retrieve all the links on the page. | 0 | 8,097 | 0 | 3 | 2009-06-18T10:32:00.000 | python,mechanize | How to get links on a webpage using mechanize and open those links | 1 | 1 | 2 | 1,012,022 | 0
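mechanize's Browser.links() is the direct route. As a dependency-free illustration of what it does, the standard library's HTMLParser (Python 3 shown) can pull the same hrefs out of a page:

```python
# Dependency-free sketch of link extraction with the standard library.
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # every <a href="..."> contributes one entry
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

page = '<html><body><a href="/one">1</a><a href="/two">2</a></body></html>'
collector = LinkCollector()
collector.feed(page)
print(collector.links)  # ['/one', '/two']
```

Each collected href could then be fed back to the browser (or urlopen) to follow the link.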
0 | 0 | I'm trying to use python to sftp a file, and the code works great in the interactive shell -- even pasting it in all at once.
When I try to import the file (just to compile it), the code hangs with no exceptions or obvious errors.
How do I get the code to compile, or does someone have working code that accomplishes ... | true | 1,013,064 | 1.2 | 1 | 0 | 0 | Weirdness aside, I was just using import to compile the code. Turning the script into a function seems like an unnecessary complication for this kind of application.
Searched for alternate means to compile and found:
import py_compile
py_compile.compile("ProblemDemo.py")
This generated a pyc file that works as inte... | 0 | 6,518 | 0 | 3 | 2009-06-18T14:45:00.000 | python,shell,compilation,sftp | Why does this python code hang on import/compile but work in the shell? | 1 | 1 | 3 | 1,013,366 | 0 |
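The py_compile call from the answer, made self-contained with a temporary module standing in for ProblemDemo.py:

```python
# Byte-compile a module without importing (and hence executing) it.
import os
import py_compile
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, "ProblemDemo.py")
    with open(src, "w") as f:
        f.write("x = 1\n")
    # compile() returns the path of the byte-compiled output file
    pyc_path = py_compile.compile(src, cfile=src + "c")
    compiled = os.path.exists(pyc_path)
print(compiled)
```

Because py_compile only compiles, no module-level code runs, which sidesteps the hang the question describes.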
0 | 0 | I have formatted text (with newlines, tabs, etc.) coming in from a Telnet connection. I have a python script that manages the Telnet connection and embeds the Telnet response in XML that then gets passed through an XSLT transform. How do I pass that XML through the transform without losing the original formatting? I... | false | 1,015,816 | 0 | 0 | 0 | 0 | Data stored in XML comes out the same way it goes in. So if you store the text in an element, no whitespace and newlines are lost unless you tamper with the data in the XSLT.
Enclosing the text in CDATA is unnecessary unless there is some formatting that is invalid in XML (pointy brackets, ampersands, quotes) and you ... | 0 | 223 | 0 | 0 | 2009-06-19T00:02:00.000 | python,xslt | Passing Formatted Text Through XSLT | 1 | 1 | 2 | 1,016,919 | 0 |
0 | 0 | I'm trying to build some statistics for an email group I participate. Is there any Python API to access the email data on a GoogleGroup?
Also, I know some statistics are available on the group's main page. I'm looking for something more complex than what is shown there. | true | 1,017,794 | 1.2 | 1 | 0 | 3 | There isn't an API that I know of, however you can access the XML feed and manipulate it as required. | 0 | 1,163 | 0 | 6 | 2009-06-19T12:48:00.000 | python,google-groups | Is there an API to access a Google Group data? | 1 | 1 | 1 | 1,017,810 | 0 |
1 | 0 | I am tired of clicking "File" and then "Save Page As" in Firefox when I want to save some websites.
Is there any script to do this in Python? I would like to save the pictures and css files so that when I read it offline, it looks normal. | false | 1,035,825 | 0.039979 | 0 | 0 | 1 | Like Cobbal stated, this is largely what wget is designed to do. I believe there's some flags/arguments that you can set to make it download the entire page, CSS + all. I suggest just alias-ing into something more convenient to type, or tossing it into a quick script. | 0 | 2,633 | 0 | 3 | 2009-06-23T23:40:00.000 | python | Any Python Script to Save Websites Like Firefox? | 1 | 1 | 5 | 1,035,855 | 0 |
1 | 0 | I am trying to write a Python-based Web Bot that can read and interpret an HTML page, then execute an onClick function and receive the resulting new HTML page. I can already read the HTML page and I can determine the functions to be called by the onClick command, but I have no idea how to execute those functions or how... | false | 1,036,660 | 0 | 0 | 0 | 0 | Well obviously python won't interpret the JS for you (though there may be modules out there that can). I suppose you need to convert the JS instructions to equivalent transformations in Python.
I suppose ElementTree or BeautifulSoup would be good starting points to interpret the HTML structure. | 0 | 6,042 | 0 | 3 | 2009-06-24T06:15:00.000 | python,html,bots | Python Web-based Bot | 1 | 2 | 7 | 1,036,758 | 0 |
1 | 0 | I am trying to write a Python-based Web Bot that can read and interpret an HTML page, then execute an onClick function and receive the resulting new HTML page. I can already read the HTML page and I can determine the functions to be called by the onClick command, but I have no idea how to execute those functions or how... | false | 1,036,660 | 0 | 0 | 0 | 0 | Why don't you just sniff what gets sent after the onclick event and replicate that with your bot? | 0 | 6,042 | 0 | 3 | 2009-06-24T06:15:00.000 | python,html,bots | Python Web-based Bot | 1 | 2 | 7 | 5,873,989 | 0 |
0 | 0 | What's the fastest way to get a large number of files (relatively small, 10-50 kB) from Amazon S3 from Python? (On the order of 200,000 to a million files.)
At the moment I am using boto to generate Signed URLs, and using PyCURL to get the files one by one.
Would some type of concurrency help? PyCurl.CurlMulti object?
I ... | false | 1,051,275 | 0.066568 | 0 | 0 | 2 | I don't know anything about python, but in general you would want to break the task down into smaller chunks so that they can be run concurrently. You could break it down by file type, or alphabetical or something, and then run a separate script for each portion of the break down. | 0 | 3,744 | 0 | 3 | 2009-06-26T21:02:00.000 | python,curl,amazon-s3,amazon-web-services,boto | Downloading a Large Number of Files from S3 | 1 | 2 | 6 | 1,051,338 | 0 |
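A sketch of the chunking idea with a thread pool. fetch_one is a placeholder for the real S3 GET (signed URL plus HTTP request); because small-file downloads are I/O-bound, a few dozen threads usually beat a sequential loop:

```python
# Placeholder concurrency sketch: swap fetch_one's body for the real
# boto-signed-URL HTTP GET.
from concurrent.futures import ThreadPoolExecutor

def fetch_one(key):
    # stand-in: a real version would GET the signed URL for `key`
    # and write the body to disk, returning (key, bytes_written)
    return (key, len(key))

keys = ["file-%04d" % i for i in range(100)]
with ThreadPoolExecutor(max_workers=20) as pool:
    results = dict(pool.map(fetch_one, keys))
print(len(results))  # 100
```

Tuning max_workers against observed throughput (and S3 request limits) is the main knob.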
0 | 0 | What's the fastest way to get a large number of files (relatively small, 10-50 kB) from Amazon S3 from Python? (On the order of 200,000 to a million files.)
At the moment I am using boto to generate Signed URLs, and using PyCURL to get the files one by one.
Would some type of concurrency help? PyCurl.CurlMulti object?
I ... | false | 1,051,275 | 0 | 0 | 0 | 0 | I've been using txaws with twisted for S3 work, though what you'd probably want is just to get the authenticated URL and use twisted.web.client.DownloadPage (by default will happily go from stream to file without much interaction).
Twisted makes it easy to run at whatever concurrency you want. For something on the ord... | 0 | 3,744 | 0 | 3 | 2009-06-26T21:02:00.000 | python,curl,amazon-s3,amazon-web-services,boto | Downloading a Large Number of Files from S3 | 1 | 2 | 6 | 1,051,408 | 0 |
0 | 0 | I need to write a script that connects to a bunch of sites on our corporate intranet over HTTPS and verifies that their SSL certificates are valid; that they are not expired, that they are issued for the correct address, etc. We use our own internal corporate Certificate Authority for these sites, so we have the publi... | false | 1,087,227 | -0.01818 | 0 | 0 | -1 | I was having the same problem but wanted to minimize 3rd party dependencies (because this one-off script was to be executed by many users). My solution was to wrap a curl call and make sure that the exit code was 0. Worked like a charm. | 0 | 206,011 | 0 | 87 | 2009-07-06T14:17:00.000 | python | Validate SSL certificates with Python | 1 | 1 | 11 | 20,517,707 | 0 |
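The exit-code-wrapping approach from the last answer, in sketch form. The real command would be something like curl --cacert corp-ca.pem against each intranet URL; a trivial interpreter call stands in so the sketch runs anywhere:

```python
# Wrap an external checker and treat exit status 0 as success.
import subprocess
import sys

def command_succeeds(cmd):
    return subprocess.call(cmd) == 0

# real usage (assumption, curl flags from its manual):
#   command_succeeds(["curl", "--cacert", "corp-ca.pem", "--silent",
#                     "--output", "/dev/null", "https://intranet.example"])
ok = command_succeeds([sys.executable, "-c", "raise SystemExit(0)"])
bad = command_succeeds([sys.executable, "-c", "raise SystemExit(1)"])
print(ok, bad)  # True False
```

curl exits non-zero on certificate validation failures, which is exactly the signal being checked here.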
0 | 0 | I have my web page in Python. I am able to get the IP address of the user who accesses our web page, and we also want to get the MAC address of the user's PC. Is that possible in Python? We are using a Linux PC and want to get it on Linux. | false | 1,092,379 | 0.049958 | 0 | 0 | 1 | All you can access is what the user sends to you.
MAC address is not part of that data. | 0 | 7,015 | 1 | 2 | 2009-07-07T13:35:00.000 | python,python-3.x | want to get mac address of remote PC | 1 | 1 | 4 | 1,092,392 | 0 |
0 | 0 | I am writing an application to test a network driver for handling corrupted data. I thought of sending this data using a raw socket, so it would not be corrected by the sending machine's TCP/IP stack.
I am writing this application solely on Linux. I have code examples of using raw sockets in system-calls, but I would ... | false | 1,117,958 | 0.049958 | 0 | 0 | 2 | Eventually the best solution for this case was to write the entire thing in C, because it's not a big application, so it would've incurred greater penalty to write such a small thing in more than 1 language.
After much toying with both the C and python RAW sockets, I eventually preferred the C RAW sockets. RAW sockets ... | 0 | 109,072 | 1 | 46 | 2009-07-13T06:36:00.000 | python,sockets,raw-sockets | How Do I Use Raw Socket in Python? | 1 | 1 | 8 | 1,186,810 | 0 |
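For a flavor of what the C raw-socket code deals with, here is an IPv4 header packed by hand with the struct module. Sending it would require a raw socket and root privileges; the packing itself runs anywhere, and the addresses are TEST-NET placeholders:

```python
# Pack a 20-byte IPv4 header by hand; this is what IP_HDRINCL raw sockets
# expect you to supply before the payload.
import socket
import struct

version_ihl = (4 << 4) | 5          # IPv4, header length = 5 x 32-bit words
header = struct.pack(
    "!BBHHHBBH4s4s",
    version_ihl,                     # version + IHL
    0,                               # DSCP/ECN
    20,                              # total length (header only, no payload)
    0x1234,                          # identification
    0,                               # flags + fragment offset
    64,                              # TTL
    socket.IPPROTO_TCP,              # protocol
    0,                               # checksum (kernel fills it in for IP_HDRINCL)
    socket.inet_aton("192.0.2.1"),   # source (TEST-NET placeholder)
    socket.inet_aton("192.0.2.2"),   # destination (TEST-NET placeholder)
)
print(len(header))  # 20
```

Deliberately corrupting fields in a buffer like this is exactly the kind of test the question describes.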
0 | 0 | To be more specific, I'm using Python and making a pool of HTTPConnection objects (httplib), and was wondering if there is a limit on the number of concurrent HTTP connections on a Windows server. | false | 1,121,951 | 0.291313 | 0 | 0 | 3 | AFAIK, the number of internet sockets (necessary to make TCP/IP connections) is naturally limited on every machine, but it's pretty high. 1000 simultaneous connections shouldn't be a problem for the client machine, as each socket uses only a little memory. If you start receiving data through all these channels, this mig... | 0 | 4,587 | 0 | 4 | 2009-07-13T20:48:00.000 | python | What is the maximum simultaneous HTTP connections allowed on one machine (windows server 2008) using python | 1 | 1 | 2 | 1,122,107 | 0
0 | 0 | I can't run Firefox from a sudoed Python script that drops privileges to a normal user. If I write
$ sudo python
>>> import os
>>> import pwd, grp
>>> uid = pwd.getpwnam('norby')[2]
>>> gid = grp.getgrnam('norby')[2]
>>> os.setegid(gid)
>>> os.seteuid(uid)
>>> import webbrowser
>>> webbrowser.get('firefox').open('www.go... | true | 1,139,835 | 1.2 | 0 | 0 | 1 | This could be your environment. Changing the permissions will still leave environment variables like $HOME pointing at the root user's directory, which will be inaccessible. It may be worth trying altering these variables by changing os.environ before launching the browser. There may also be other variables worth ch... | 0 | 903 | 1 | 0 | 2009-07-16T19:38:00.000 | python,browser,debian,uid | Python fails to execute firefox webbrowser from a root executed script with privileges drop | 1 | 1 | 1 | 1,140,199 | 0 |
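A sketch of the environment fix suggested in the answer (Linux-only): build a corrected environment from the pwd entry, since $HOME still points at root's directory after seteuid/setegid. In the session above you would apply it with os.environ.update(...) before importing webbrowser; 'norby' would be the target user there, while the demo below uses the current user so it runs anywhere.

```python
# Build an environment matching the user we dropped privileges to.
import os
import pwd

def env_for(username):
    entry = pwd.getpwnam(username)
    env = dict(os.environ)
    env.update({"HOME": entry.pw_dir,      # root's HOME would be unreadable
                "USER": entry.pw_name,
                "LOGNAME": entry.pw_name})
    return env

# demo with the current user; in the question this would be 'norby'
me = pwd.getpwuid(os.getuid()).pw_name
print(env_for(me)["HOME"])
```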
0 | 0 | I’m looking for a quick way to get an HTTP response code from a URL (e.g. 200, 404, etc.). I’m not sure which library to use. | false | 1,140,661 | 0.07486 | 0 | 0 | 3 | The urllib2.HTTPError exception does not contain a getcode() method. Use the code attribute instead. | 0 | 153,420 | 0 | 90 | 2009-07-16T22:27:00.000 | python | What’s the best way to get an HTTP response code from a URL? | 1 | 1 | 8 | 1,491,225 | 0
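A self-contained illustration using the modern standard library (urllib.request; urllib2 in the Python 2 era), with a local http.server standing in for a real URL. Note that the 404 arrives as an HTTPError whose code attribute carries the status:

```python
# Fetch status codes; a throwaway local server stands in for real URLs.
import threading
import urllib.error
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200 if self.path == "/ok" else 404)
        self.end_headers()

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()
base = "http://127.0.0.1:%d" % server.server_address[1]

def status(url):
    try:
        return urllib.request.urlopen(url).getcode()
    except urllib.error.HTTPError as e:
        return e.code  # error responses carry the status in .code

ok = status(base + "/ok")
missing = status(base + "/missing")
print(ok, missing)  # 200 404
server.shutdown()
```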
0 | 0 | I have a large xml file (40 Gb) that I need to split into smaller chunks. I am working with limited space, so is there a way to delete lines from the original file as I write them to new files?
Thanks! | false | 1,145,286 | -0.028564 | 0 | 0 | -1 | It's time to buy a new hard drive!
You can make a backup before trying all the other answers, so you don't lose data :)
0 | 0 | I have a large xml file (40 Gb) that I need to split into smaller chunks. I am working with limited space, so is there a way to delete lines from the original file as I write them to new files?
Thanks! | false | 1,145,286 | 0.028564 | 0 | 0 | 1 | I'm pretty sure there is, as I've even been able to edit/read from the source files of scripts I've run, but the biggest problem would probably be all the shifting that would be done if you started at the beginning of the file. On the other hand, if you go through the file and record all the starting positions of the l... | 0 | 1,833 | 0 | 8 | 2009-07-17T19:41:00.000 | python,file | Change python file in place | 1 | 3 | 7 | 1,145,329 | 0 |
0 | 0 | I have a large xml file (40 Gb) that I need to split into smaller chunks. I am working with limited space, so is there a way to delete lines from the original file as I write them to new files?
Thanks! | false | 1,145,286 | 0 | 0 | 0 | 0 | If time is not a major factor (or wear and tear on your disk drive):
Open handle to file
Read up to the size of your partition / logical break point (due to the xml)
Save the rest of your file to disk (not sure how python handles this as far as directly overwriting file or memory usage)
Write the partition to disk
got... | 0 | 1,833 | 0 | 8 | 2009-07-17T19:41:00.000 | python,file | Change python file in place | 1 | 3 | 7 | 1,145,341 | 0 |
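One way to make the limited-space idea concrete: peel chunks off the end of the file and truncate as you go, so disk usage never doubles (trimming from the front would force rewriting everything after it). The chunk boundaries here are plain byte counts; a real XML split would align them with element boundaries:

```python
# Split a file into parts while reclaiming its space as we go.
import os
import tempfile

def split_from_end(path, chunk_size):
    pieces = []
    with open(path, "rb+") as f:
        size = f.seek(0, os.SEEK_END)
        while size > 0:
            start = max(0, size - chunk_size)
            f.seek(start)
            data = f.read(size - start)
            out = tempfile.NamedTemporaryFile(delete=False, suffix=".part")
            out.write(data)
            out.close()
            pieces.append(out.name)
            f.truncate(start)        # give the space back before continuing
            size = start
    return pieces[::-1]              # pieces were taken back-to-front

with tempfile.NamedTemporaryFile(delete=False) as src:
    src.write(b"0123456789" * 10)    # 100-byte stand-in for the 40 GB file
parts = split_from_end(src.name, 40)
print([os.path.getsize(p) for p in parts])  # [20, 40, 40]
```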
0 | 0 | I'm trying to extract some data from various HTML pages using a python program. Unfortunately, some of these pages contain user-entered data which occasionally has "slight" errors - namely tag mismatching.
Is there a good way to have python's xml.dom try to correct errors or something of the sort? Alternatively, is the... | false | 1,147,090 | 0 | 0 | 0 | 0 | If jython is acceptable to you, tagsoup is very good at parsing junk - if it is, I found the jdom libraries far easier to use than other xml alternatives.
This is a snippet from a demo mockup to do with screen scraping from tfl's journey planner:
private Document getRoutePage(HashMap params) throws Exception {
... | 0 | 778 | 0 | 0 | 2009-07-18T09:24:00.000 | python,xml,dom,expat-parser | Python xml.dom and bad XML | 1 | 1 | 4 | 1,149,208 | 0 |
1 | 0 | I have a dilemma where I want to create an application that manipulates google contacts information. The problem comes down to the fact that Python only supports version 1.0 of the api whilst Java supports 3.0.
I also want it to be web-based so I'm having a look at google app engine, but it seems that only the python v... | true | 1,148,165 | 1.2 | 0 | 0 | 0 | I'm having a look into the google data api protocol which seems to solve the problem. | 0 | 802 | 0 | 0 | 2009-07-18T18:08:00.000 | java,python,google-app-engine,gdata-api | Possible to access gdata api when using Java App Engine? | 1 | 1 | 3 | 1,149,886 | 0 |
0 | 0 | I'm writing a web-app that uses several 3rd party web APIs, and I want to keep track of the low level requests and responses for ad-hoc analysis. So I'm looking for a recipe that will get Python's urllib2 to log all bytes transferred via HTTP. Maybe a sub-classed Handler? | false | 1,170,744 | 0.197375 | 0 | 0 | 2 | This looks pretty tricky to do. There are no hooks in urllib2, urllib, or httplib (which this builds on) for intercepting either input or output data.
The only thing that occurs to me, other than switching tactics to use an external tool (of which there are many, and most people use such things), would be to write a s... | 0 | 3,899 | 0 | 19 | 2009-07-23T09:56:00.000 | python,http,logging,urllib2 | How do I get urllib2 to log ALL transferred bytes | 1 | 1 | 2 | 1,844,608 | 0 |
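One built-in partial hook worth knowing: httplib (http.client in Python 3) has a debug level that echoes request and response lines to stdout. It is not a byte-exact log, but it covers many ad-hoc needs without subclassing:

```python
# Enable httplib/http.client debug output for a connection.
import http.client

conn = http.client.HTTPConnection("example.com")
conn.set_debuglevel(1)   # subsequent request/response traffic is echoed
print(conn.debuglevel)   # 1
```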
0 | 0 | This only needs to work on a single subnet and is not for malicious use.
I have a load testing tool written in Python that basically blasts HTTP requests at a URL. I need to run performance tests against an IP-based load balancer, so the requests must come from a range of IP's. Most commercial performance tools pro... | false | 1,180,878 | 1 | 0 | 0 | 7 | Quick note, as I just learned this yesterday:
I think you've implied you know this already, but any responses to an HTTP request go to the IP address that shows up in the header. So if you are wanting to see those responses, you need to have control of the router and have it set up so that the spoofed IPs are all route... | 0 | 37,293 | 0 | 27 | 2009-07-25T01:11:00.000 | python,http,networking,sockets,urllib2 | Spoofing the origination IP address of an HTTP request | 1 | 2 | 5 | 1,180,897 | 0 |
0 | 0 | This only needs to work on a single subnet and is not for malicious use.
I have a load testing tool written in Python that basically blasts HTTP requests at a URL. I need to run performance tests against an IP-based load balancer, so the requests must come from a range of IP's. Most commercial performance tools pro... | false | 1,180,878 | 0.039979 | 0 | 0 | 1 | I suggest seeing if you can configure your load balancer to make it's decision based on the X-Forwarded-For header, rather than the source IP of the packet containing the HTTP request. I know that most of the significant commercial load balancers have this capability.
If you can't do that, then I suggest that you prob... | 0 | 37,293 | 0 | 27 | 2009-07-25T01:11:00.000 | python,http,networking,sockets,urllib2 | Spoofing the origination IP address of an HTTP request | 1 | 2 | 5 | 1,186,102 | 0 |
0 | 0 | I wonder what is the best way to handle parallel SSH connections in python.
I need to open several SSH connections to keep in the background and to feed commands to in an interactive or timed-batch way.
Is it possible to do this with the paramiko library? It would be nice not to spawn a different SSH process for each connectio... | false | 1,185,855 | 0.033321 | 1 | 0 | 1 | You can simply use subprocess.Popen for that purpose, without any problems.
However, you might want to simply install cronjobs on the remote machines. :-) | 0 | 7,048 | 1 | 3 | 2009-07-26T23:19:00.000 | python,ssh,parallel-processing | Parallel SSH in Python | 1 | 4 | 6 | 1,185,871 | 0 |
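The subprocess.Popen suggestion in sketch form: start several processes without waiting, then collect their output. With ssh installed, each argv would be ["ssh", host, command]; interpreter one-liners stand in so the sketch runs anywhere:

```python
# Launch several child processes concurrently, then gather their output.
import subprocess
import sys

commands = [[sys.executable, "-c", "print(%d * %d)" % (n, n)] for n in (2, 3, 4)]
procs = [subprocess.Popen(c, stdout=subprocess.PIPE, text=True) for c in commands]
# all three are already running; communicate() just collects the results
outputs = [p.communicate()[0].strip() for p in procs]
print(outputs)  # ['4', '9', '16']
```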
0 | 0 | I wonder what is the best way to handle parallel SSH connections in python.
I need to open several SSH connections to keep in the background and to feed commands to in an interactive or timed-batch way.
Is it possible to do this with the paramiko library? It would be nice not to spawn a different SSH process for each connectio... | false | 1,185,855 | 0.033321 | 1 | 0 | 1 | Reading the paramiko API docs, it looks like it is possible to open one ssh connection, and multiplex as many ssh tunnels on top of that as are wished. Common ssh clients (openssh) often do things like this automatically behind the scenes if there is already a connection open.
0 | 0 | I wonder what is the best way to handle parallel SSH connections in python.
I need to open several SSH connections to keep in the background and to feed commands to in an interactive or timed-batch way.
Is it possible to do this with the paramiko library? It would be nice not to spawn a different SSH process for each connectio... | false | 1,185,855 | 0.099668 | 1 | 0 | 3 | Yes, you can do this with paramiko.
If you're connecting to one server, you can run multiple channels through a single connection. If you're connecting to multiple servers, you can start multiple connections in separate threads. No need to manage multiple processes, although you could substitute the multiprocessing mod... | 0 | 7,048 | 1 | 3 | 2009-07-26T23:19:00.000 | python,ssh,parallel-processing | Parallel SSH in Python | 1 | 4 | 6 | 1,188,586 | 0 |
0 | 0 | I wonder what is the best way to handle parallel SSH connections in python.
I need to open several SSH connections to keep in the background and to feed commands to in an interactive or timed-batch way.
Is it possible to do this with the paramiko library? It would be nice not to spawn a different SSH process for each connectio... | false | 1,185,855 | -0.033321 | 1 | 0 | -1 | This might not be relevant to your question. But there are tools like pssh, clusterssh etc. that can spawn connections in parallel. You can couple Expect with pssh to control them too.
0 | 0 | I've only used XML RPC and I haven't really delved into SOAP but I'm trying to find a good comprehensive guide, with real world examples or even a walkthrough of some minimal REST application.
I'm most comfortable with Python/PHP. | false | 1,186,839 | 0.066568 | 1 | 0 | 1 | I like the examples in the Richardson & Ruby book, "RESTful Web Services" from O'Reilly. | 0 | 499 | 0 | 0 | 2009-07-27T07:11:00.000 | php,python,xml,rest,soap | Real world guide on using and/or setting up REST web services? | 1 | 1 | 3 | 1,186,876 | 0 |
0 | 0 | I have a web service that accepts passed in params using http POST but in a specific order, eg (name,password,data). I have tried to use httplib but all the Python http POST libraries seem to take a dictionary, which is an unordered data structure. Any thoughts on how to http POST params in order for Python?
Thanks! | true | 1,188,737 | 1.2 | 0 | 0 | 2 | Why would you need a specific order in the POST parameters in the first place? As far as I know there are no requirements that POST parameter order is preserved by web servers.
Every language I have used has used a dictionary-type object to hold these parameters, as they are inherently key/value pairs. | 1 | 345 | 0 | 2 | 2009-07-27T15:11:00.000 | python,http | Python POST ordered params | 1 | 1 | 1 | 1,188,759 | 0
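Worth noting: you do not need a dict at all. urlencode (urllib.parse in Python 3, urllib in Python 2) accepts a list of (key, value) pairs and preserves their order, so an order-sensitive endpoint can be fed exactly:

```python
# Build an ordered POST body from a list of tuples instead of a dict.
from urllib.parse import urlencode

body = urlencode([("name", "alice"), ("password", "s3cret"), ("data", "x y")])
print(body)  # name=alice&password=s3cret&data=x+y
```

The resulting string is what you would pass as the request body with a Content-Type of application/x-www-form-urlencoded.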
0 | 0 | I need to port some code that relies heavily on lxml from a CPython application to IronPython.
lxml is very Pythonic and I would like to keep using it under IronPython, but it depends on libxslt and libxml2, which are C extensions.
Does anyone know of a workaround to allow lxml under IronPython or a version of lxml tha... | false | 1,200,726 | 0.099668 | 1 | 0 | 1 | Something which you might have already considered:
An alternative is to first port the lxml library to IPy and then your code (depending on the code size). You might have to write some C# wrappers for the native C calls to the C extensions -- I'm not sure what issues, if any, are involved in this with regards to IPy.
... | 0 | 2,349 | 0 | 6 | 2009-07-29T14:36:00.000 | .net,xml,ironpython,python,lxml | How to get lxml working under IronPython? | 1 | 1 | 2 | 1,211,395 | 0 |
0 | 0 | I'm trying to raise an exception on the Server Side of an SimpleXMLRPCServer; however, all attempts get a "Fault 1" exception on the client side.
RPC_Server.AbortTest()
File "C:\Python25\lib\xmlrpclib.py", line 1147, in __call__
return self.__send(self.__name, args)
File "C:\Python25\lib\xmlrpclib.py", line 1437, i... | false | 1,201,507 | 0.099668 | 1 | 0 | 1 | Yes, this is what happens when you raise an exception on the server side. Are you expecting the SimpleXMLRPCServer to return the exception to the client?
You can only use objects that can be marshalled through XML. This includes
boolean : The True and False constants
integers : Pass in directly
floating-point numbers ... | 0 | 746 | 0 | 0 | 2009-07-29T16:34:00.000 | python,exception,simplexmlrpcserver | Sending an exception on the SimpleXMLRPCServer | 1 | 1 | 2 | 1,202,742 | 0 |
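The round trip can be seen without a network: xmlrpc.client (xmlrpclib in Python 2) marshals a server-side exception as a fault payload, and unmarshalling it re-raises the Fault on the client:

```python
# Marshal a Fault the way the server would, then unmarshal it client-side.
import xmlrpc.client

wire = xmlrpc.client.dumps(xmlrpc.client.Fault(1, "AbortTest failed"))
print("faultCode" in wire)  # True

caught = None
try:
    xmlrpc.client.loads(wire)       # loads() raises the embedded Fault
except xmlrpc.client.Fault as fault:
    caught = fault
print(caught.faultCode, caught.faultString)  # 1 AbortTest failed
```

This is why the client only ever sees a generic "Fault 1": anything richer must be encoded into faultString (or returned as a marshallable value).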
0 | 0 | How can I get the current Windows browser proxy settings, and also set them to a value?
I know I can do this by looking in the registry at Software\Microsoft\Windows\CurrentVersion\Internet Settings\ProxyServer, but I'm looking, if it is possible, to do this without messing directly with the registry. | true | 1,201,771 | 1.2 | 0 | 0 | 3 | The urllib module automatically retrieves settings from the registry when no proxies are specified as a parameter or in the environment variables:
In a Windows environment, if no proxy environment variables are set, proxy settings are obtained from the registry’s Internet Settings section.
See the documentation of urlli... | 0 | 12,290 | 0 | 2 | 2009-07-29T17:14:00.000 | python,windows,proxy,registry | How to set proxy in Windows with Python? | 1 | 1 | 3 | 1,205,881 | 0 |
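The lookup the answer describes is exposed directly: urllib.request.getproxies() (urllib.getproxies() in Python 2) returns the scheme-to-proxy mapping the opener would use, drawn from the registry on Windows or from environment variables elsewhere:

```python
# Inspect the proxy mapping urllib would use on this machine.
import urllib.request

proxies = urllib.request.getproxies()
print(type(proxies))  # a dict like {'http': 'http://proxy:8080'} or {}
```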
1 | 0 | I am trying to download an mp3 file to the user's machine without their consent while they are listening to the song. So, next time they visit that web page they would not have to download the same mp3, but play it back from the local file. This will save some bandwidth for me and for them. It is something Pandora used to do but I really d... | false | 1,211,363 | 0.132549 | 0 | 0 | 2 | Don't do this.
Most files are cached anyway.
But if you really want to add this (because users asked for it), make it optional (default off). | 0 | 170 | 0 | 0 | 2009-07-31T08:47:00.000 | python,django,web-applications | downloading files to users machine? | 1 | 2 | 3 | 1,211,434 | 0 |
1 | 0 | I am trying to download an mp3 file to the user's machine without their consent while they are listening to the song. So, next time they visit that web page they would not have to download the same mp3, but play it back from the local file. This will save some bandwidth for me and for them. It is something Pandora used to do but I really d... | true | 1,211,363 | 1.2 | 0 | 0 | 4 | You can't forcefully download files to a user without their consent. If that were possible, you can only imagine what a severe security flaw it would be.
You can do one of two things:
count on the browser to cache the media file
serve the media via some 3rd party plugin (Flash, for example) | 0 | 170 | 0 | 0 | 2009-07-31T08:47:00.000 | python,django,web-applications | downloading files to users machine? | 1 | 2 | 3 | 1,211,370 | 0 |
0 | 0 | Is there a way to limit the amount of data downloaded by Python's urllib2 module? Sometimes I encounter broken sites with a sort of /dev/random as a page, and it turns out that they use up all the memory on a server. | false | 1,224,910 | 0.53705 | 0 | 0 | 3 | urllib2.urlopen returns a file-like object, and you can (at least in theory) .read(N) from such an object to limit the amount of data returned to N bytes at most.
This approach is not entirely fool-proof, because an actively-hostile site may go to quite some lengths to fool a reasonably trusting receiver, like urllib2's ... | 0 | 869 | 0 | 3 | 2009-08-03T22:20:00.000 | python,urllib2 | limit downloaded page size | 1 | 1 | 1 | 1,224,950 | 0
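The .read(N) cap from the answer, wrapped in a loop so a trickling response can never exceed the budget; io.BytesIO stands in for the file-like object urlopen returns:

```python
# Read at most `limit` bytes from a file-like object, in bounded chunks.
import io

def read_limited(fileobj, limit):
    chunks, remaining = [], limit
    while remaining > 0:
        chunk = fileobj.read(min(8192, remaining))
        if not chunk:              # genuine end of stream
            break
        chunks.append(chunk)
        remaining -= len(chunk)
    return b"".join(chunks)

endless_ish = io.BytesIO(b"x" * 100000)   # stand-in for a /dev/random page
data = read_limited(endless_ish, 10000)
print(len(data))  # 10000
```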
0 | 0 | When I try to automatically download a file from some webpage using Python,
I get a Webpage Dialog window (I use IE). The window has two buttons, 'Continue' and 'Cancel'. I cannot figure out how to click the Continue button. The problem is
that I don't know how to control Webpage Dialog with Python. I tried ... | false | 1,225,686 | 0 | 0 | 0 | 0 | You can't, and you don't want to. When you ask a question, try explaining what you are trying to achieve, and not just the task immediately before you. You are likely barking down the wrong path. There is some other way of doing what you are trying to do. | 0 | 2,761 | 0 | 0 | 2009-08-04T04:03:00.000 | python,dialog,webpage | How to control Webpage dialog with python | 1 | 1 | 3 | 1,226,061 | 0 |
0 | 0 | I have a pretty intensive chat socket server written in Twisted Python; I start it using internet.TCPServer with a factory, and that factory references a protocol object that handles all communications with the client.
How should I make sure a protocol instance completely destroys itself once a client has disconnecte... | true | 1,234,292 | 1.2 | 0 | 0 | 0 | OK, to sort this issue out I have set a __del__ method in the protocol class, and I am now logging protocol instances that have not been garbage collected within one minute of the client disconnecting.
If anybody has any better solution I'll still be glad to hear about it but so far I have already fixed... | 0 | 721 | 1 | 4 | 2009-08-05T16:23:00.000 | python,sockets,twisted,twisted.words | In Twisted Python - Make sure a protocol instance would be completely deallocated | 1 | 1 | 1 | 1,236,382 | 0 |
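A less invasive alternative to defining __del__ for this kind of check: weakref.finalize fires when the instance is collected, so a leaked protocol shows up as a finalizer that never ran. Protocol below is a stand-in for the real protocol class:

```python
# Detect garbage collection of an instance without touching its class.
import gc
import weakref

class Protocol:            # stand-in for the real twisted protocol
    pass

p = Protocol()
collected = []
weakref.finalize(p, collected.append, "gone")  # fires on collection

del p
gc.collect()               # harmless in CPython; needed for cycle collectors
print(collected)  # ['gone']
```

Unlike __del__, finalize never blocks cycle collection of the object it watches.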
1 | 0 | I'm trying to find a Python library that would take an audio file (e.g. .ogg, .wav) and convert it into mp3 for playback on a webpage.
Also, any thoughts on setting its quality for playback would be great.
Thank you. | false | 1,246,131 | 0.066568 | 0 | 0 | 2 | You may use the ctypes module to call functions directly from dynamic libraries. It doesn't require you to install external Python libs and it has better performance than command-line tools, but it's usually harder to implement (plus, of course, you need to provide the external library). | 0 | 51,200 | 0 | 21 | 2009-08-07T17:51:00.000 | python,audio,compression | Python library for converting files to MP3 and setting their quality | 1 | 2 | 6 | 1,334,217 | 0
1 | 0 | I'm trying to find a Python library that would take an audio file (e.g. .ogg, .wav) and convert it into mp3 for playback on a webpage.
Also, any thoughts on setting its quality for playback would be great.
Thank you. | false | 1,246,131 | 0.033321 | 0 | 0 | 1 | Another option to avoid installing Python modules for this simple task would be to just exec "lame" or other command line encoder from the Python script (with the popen module.) | 0 | 51,200 | 0 | 21 | 2009-08-07T17:51:00.000 | python,audio,compression | Python library for converting files to MP3 and setting their quality | 1 | 2 | 6 | 1,246,816 | 0 |
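A sketch of shelling out to a command-line encoder, as the answer suggests, with lame as the example (-V sets VBR quality; lower is better). The filenames are placeholders, and the subprocess only runs if the encoder and input actually exist:

```python
# Build (and optionally run) a lame encode command.
import os
import shutil
import subprocess

def lame_command(src, dst, vbr_quality=2):
    # -V n selects VBR quality; 2 is a common "high quality" setting
    return ["lame", "-V", str(vbr_quality), src, dst]

cmd = lame_command("input.wav", "output.mp3")   # placeholder filenames
print(cmd)  # ['lame', '-V', '2', 'input.wav', 'output.mp3']

# only attempt the encode when lame and the input file are really there
if shutil.which("lame") and os.path.exists("input.wav"):
    subprocess.run(cmd, check=True)
```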
0 | 0 | I am considering programming the network related features of my application in Python instead of the C/C++ API. The intended use of networking is to pass text messages between two instances of my application, similar to a game passing player positions as often as possible over the network.
Although the python socket mo... | false | 1,253,905 | 0.148885 | 0 | 0 | 3 | Python is a mature language that can do almost anything that you can do in C/C++ (even direct memory access if you really want to hurt yourself).
You'll find that you can write beautiful code in it in a very short time, that this code is readable from the start and that it will stay readable (you will still know what i... | 0 | 1,015 | 0 | 4 | 2009-08-10T09:26:00.000 | python,network-programming | Suggestion Needed - Networking in Python - A good idea? | 1 | 2 | 4 | 1,254,288 | 0 |
0 | 0 | I am considering programming the network related features of my application in Python instead of the C/C++ API. The intended use of networking is to pass text messages between two instances of my application, similar to a game passing player positions as often as possible over the network.
Although the python socket mo... | false | 1,253,905 | 0.049958 | 0 | 0 | 1 | To answer #1, I know that among other things, EVE Online (the MMO) uses a variant of Python for their server code. | 0 | 1,015 | 0 | 4 | 2009-08-10T09:26:00.000 | python,network-programming | Suggestion Needed - Networking in Python - A good idea? | 1 | 2 | 4 | 1,253,945 | 0 |
0 | 0 | What is the best way to map a network share to a Windows drive using Python?
This share also requires a username and password. | false | 1,271,317 | 0 | 0 | 0 | 0 | I had trouble getting this line to work:
win32wnet.WNetAddConnection2(win32netcon.RESOURCETYPE_DISK, drive, networkPath, None, user, password)
But was successful with this:
win32wnet.WNetAddConnection2(1, 'Z:', r'\\UNCpath\share', None, 'login', 'password') | 0 | 68,179 | 0 | 33 | 2009-08-13T11:09:00.000 | python,windows,mapping,drive | What is the best way to map windows drives using Python? | 1 | 1 | 7 | 20,201,066 | 0 |
0 | 0 | What would be the best method to restrict access to my XMLRPC server by IP address? I see the class CGIScript in web/twcgi.py has a render method that is accessing the request... but I am not sure how to gain access to this request in my server. I saw an example where someone patched twcgi.py to set environment varia... | false | 1,273,297 | 0 | 1 | 0 | 0 | I'd use a firewall on windows, or iptables on linux. | 0 | 3,265 | 0 | 2 | 2009-08-13T17:03:00.000 | python,twisted | Python Twisted: restricting access by IP address | 1 | 1 | 3 | 1,273,455 | 0 |
1 | 0 | I'm trying to scrape a page on YouTube with Python which has a lot of Ajax in it
I have to call the JavaScript each time to get the info. But I'm not really sure how to go about it. I'm using the urllib2 module to open URLs. Any help would be appreciated. | false | 1,281,075 | 0.07983 | 0 | 0 | 2 | Here is how I would do it: Install Firebug on Firefox, then turn the NET on in firebug and click on the desired link on YouTube. Now see what happens and what pages are requested. Find the ones that are responsible for the AJAX part of the page. Now you can use urllib or Mechanize to fetch the link. If you CAN pull the same... | 0 | 4,364 | 0 | 2 | 2009-08-15T03:34:00.000 | python,ajax,screen-scraping | Scraping Ajax - Using python | 1 | 1 | 5 | 3,134,226 | 0 |
0 | 0 | suppose, I need to perform a set of procedures on a particular website
say, fill some forms, click submit button, send the data back to server, receive the response, again do something based on the response and send the data back to the server of the website.
I know there is a webbrowser module in python, but I want to ... | false | 1,292,817 | 1 | 0 | 0 | 19 | selenium will do exactly what you want and it handles javascript | 0 | 108,234 | 0 | 29 | 2009-08-18T09:23:00.000 | python,browser-automation | How to automate browsing using python? | 1 | 3 | 15 | 3,486,971 | 0 |
0 | 0 | suppose, I need to perform a set of procedures on a particular website
say, fill some forms, click submit button, send the data back to server, receive the response, again do something based on the response and send the data back to the server of the website.
I know there is a webbrowser module in python, but I want to ... | false | 1,292,817 | 0.013333 | 0 | 0 | 1 | The best solution that I have found (and am currently implementing) is:
- scripts in Python using the Selenium WebDriver
- the PhantomJS headless browser (if Firefox is used you will have a GUI and it will be slower) | 0 | 108,234 | 0 | 29 | 2009-08-18T09:23:00.000 | python,browser-automation | How to automate browsing using python? | 1 | 3 | 15 | 20,679,640 | 0 |
0 | 0 | suppose, I need to perform a set of procedures on a particular website
say, fill some forms, click submit button, send the data back to server, receive the response, again do something based on the response and send the data back to the server of the website.
I know there is a webbrowser module in python, but I want to ... | false | 1,292,817 | 0 | 0 | 0 | 0 | httplib2 + beautifulsoup
Use Firefox + Firebug + HttpReplay to see what the JavaScript passes between the browser and the website. Using httplib2 you can essentially do the same via POST and GET | 0 | 108,234 | 0 | 29 | 2009-08-18T09:23:00.000 | python,browser-automation | How to automate browsing using python? | 1 | 3 | 15 | 3,988,708 | 0 |
0 | 0 | I'm developing an FTP client in Python ftplib. How do I add proxy support to it (most FTP apps I have seen seem to have it)? I'm especially thinking about SOCKS proxies, but also other types... FTP, HTTP (is it even possible to use HTTP proxies with an FTP program?)
Any ideas how to do it? | false | 1,293,518 | 0.066568 | 0 | 0 | 2 | The standard ftplib module doesn't support proxies. It seems the only solution is to write your own customized version of ftplib. | 0 | 18,202 | 0 | 9 | 2009-08-18T12:28:00.000 | python,proxy,ftp,ftplib | Proxies in Python FTP application | 1 | 1 | 6 | 1,293,579 | 0 |
0 | 0 | Long story short, I created a new gmail account, and linked several other accounts to it (each with 1000s of messages), which I am importing. All imported messages arrive as unread, but I need them to appear as read.
I have a little experience with python, but I've only used mail and imaplib modules for sending mail, n... | false | 1,296,446 | 0.049958 | 1 | 0 | 1 | Just go to the Gmail web interface, do an advanced search by date, then select all and mark as read. | 0 | 5,833 | 0 | 5 | 2009-08-18T20:52:00.000 | python,email,gmail,imap,pop3 | Parse Gmail with Python and mark all older than date as "read" | 1 | 2 | 4 | 1,296,476 | 0 |
0 | 0 | Long story short, I created a new gmail account, and linked several other accounts to it (each with 1000s of messages), which I am importing. All imported messages arrive as unread, but I need them to appear as read.
I have a little experience with python, but I've only used mail and imaplib modules for sending mail, n... | false | 1,296,446 | 0.049958 | 1 | 0 | 1 | Rather than try to parse the HTML, why not just use the IMAP interface? Hook it up to a standard mail client and then just sort by date and mark whichever ones you want as read. | 0 | 5,833 | 0 | 5 | 2009-08-18T20:52:00.000 | python,email,gmail,imap,pop3 | Parse Gmail with Python and mark all older than date as "read" | 1 | 2 | 4 | 1,296,465 | 0 |
0 | 0 | I have setup the logging module for my new python script. I have two handlers, one sending stuff to a file, and one for email alerts. The SMTPHandler is setup to mail anything at the ERROR level or above.
Everything works great, unless the SMTP connection fails. If the SMTP server does not respond or authentication ... | false | 1,304,593 | 0 | 1 | 0 | 0 | You probably need to do both. To figure this out, I suggest to install a local mail server and use that. This way, you can shut it down while your script runs and note down the error message.
To keep the code maintainable, you should extend SMTPHandler in such a way that you can handle the exceptions in a single place... | 0 | 1,574 | 0 | 1 | 2009-08-20T07:45:00.000 | python,logging,handler | Python logging SMTPHandler - handling offline SMTP server | 1 | 1 | 2 | 1,304,622 | 0 |
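A sketch of the "extend SMTPHandler so exceptions are handled in a single place" idea, using only Python 3's stdlib logging. The `fallback_path` parameter and the file-based fallback are my own additions for illustration, not part of the stdlib:

```python
import logging
import logging.handlers

class ResilientSMTPHandler(logging.handlers.SMTPHandler):
    """SMTPHandler that falls back to a local file when the mail server is down.

    The base class already wraps emit() in a try/except that routes failures
    to handleError(), so overriding handleError() is the single place to catch
    an unreachable server, bad credentials, timeouts, and so on.
    """

    def __init__(self, *args, fallback_path="smtp_failures.log", **kwargs):
        super().__init__(*args, **kwargs)
        self.fallback_path = fallback_path  # invented parameter, not stdlib

    def handleError(self, record):
        # SMTP delivery failed: keep the alert locally instead of losing it
        with open(self.fallback_path, "a") as f:
            f.write(self.format(record) + "\n")
```

With this in place, a dead SMTP server no longer raises out of the logging call; the alert simply lands in the fallback file for later inspection.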
0 | 0 | I need to make a cURL request to a https URL, but I have to go through a proxy as well. Is there some problem with doing this? I have been having so much trouble doing this with curl and php, that I tried doing it with urllib2 in Python, only to find that urllib2 cannot POST to https when going through a proxy. I ha... | false | 1,308,760 | 0 | 0 | 0 | 0 | No problem, as long as the proxy server supports the CONNECT method. | 0 | 9,009 | 0 | 2 | 2009-08-20T20:56:00.000 | php,python,curl,https,urllib2 | cURL: https through a proxy | 1 | 1 | 2 | 1,308,768 | 0 |
0 | 0 | I need to simulate multiple embedded server devices that are typically used for motor control. In real life, there can be multiple servers on the network and our desktop software acts as a client to all the motor servers simultaneously. We have a half-dozen of these motor control servers on hand for basic testing, bu... | false | 1,308,879 | 0.132549 | 0 | 0 | 2 | Normally you just listen on 0.0.0.0. This is an alias for all IP addresses. | 0 | 9,577 | 0 | 6 | 2009-08-20T21:17:00.000 | .net,python,networking,sockets | Simulate multiple IP addresses for testing | 1 | 2 | 3 | 1,308,897 | 0 |
0 | 0 | I need to simulate multiple embedded server devices that are typically used for motor control. In real life, there can be multiple servers on the network and our desktop software acts as a client to all the motor servers simultaneously. We have a half-dozen of these motor control servers on hand for basic testing, bu... | true | 1,308,879 | 1.2 | 0 | 0 | 5 | A. consider using Bonjour (zeroconf) for service discovery
B. You can assign 1 or more IP addresses the same NIC:
On XP, Start -> Control Panel -> Network Connections and select properties on your NIC (usually 'Local Area Connection').
Scroll down to Internet Protocol (TCP/IP), select it and click on [Properties].
If y... | 0 | 9,577 | 0 | 6 | 2009-08-20T21:17:00.000 | .net,python,networking,sockets | Simulate multiple IP addresses for testing | 1 | 2 | 3 | 1,309,096 | 0 |
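If assigning extra IP addresses to the NIC is not an option, a cheaper way to simulate many server devices on one machine is to run each fake device on its own port of 127.0.0.1. A minimal sketch (the "motor server" reply protocol here is invented for illustration):

```python
import socket
import threading

def start_fake_server(reply: bytes):
    """Start one simulated device: a TCP listener on 127.0.0.1, OS-chosen port.

    Returns (server_socket, (host, port)). Each accepted client receives
    `reply` and is disconnected -- enough for a desktop client to talk to
    N "devices" at once.
    """
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))          # port 0 = let the OS pick a free port
    srv.listen(5)

    def serve():
        while True:
            try:
                conn, _ = srv.accept()
            except OSError:             # listener was closed
                return
            conn.sendall(reply)
            conn.close()

    threading.Thread(target=serve, daemon=True).start()
    return srv, srv.getsockname()

# Simulate five devices on one machine
servers = [start_fake_server(b"motor-%d" % i) for i in range(5)]
```

The client under test then connects to each (host, port) pair as if they were separate embedded servers.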
0 | 0 | I'm in the process of writing a Python script to act as a "glue" between an application and some external devices. The script itself is quite straightforward and has three distinct processes:
Request data (from a socket connection, via UDP)
Receive response (from a socket connection, via UDP)
Process response and make... | false | 1,352,760 | 0.462117 | 0 | 0 | 5 | If you are using blocking I/O to your devices, then the script won't consume any processor while waiting for the data. How much processor you use depends on what sorts of computation you are doing with the data. | 1 | 738 | 1 | 0 | 2009-08-30T00:58:00.000 | python,performance,process,background | Python script performance as a background process | 1 | 1 | 2 | 1,352,777 | 0 |
0 | 0 | How can I download files from a website using wildcards in Python? I have a site that I need to download files from periodically. The problem is the filenames change each time. A portion of the filename stays the same, though. How can I use a wildcard to specify the unknown portion of the file in a URL? | false | 1,359,090 | 1 | 0 | 0 | 7 | If the filename changes, there must still be a link to the file somewhere (otherwise nobody would ever guess the filename). A typical approach is to get the HTML page that contains a link to the file, search through that looking for the link target, and then send a second request to get the actual file you're after.
We... | 1 | 1,403 | 0 | 1 | 2009-08-31T19:46:00.000 | python | Wildcard Downloads with Python | 1 | 1 | 2 | 1,359,101 | 0 |
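The approach described above (fetch the page, find the link whose filename matches a wildcard, then download that link) can be sketched with the stdlib alone; the pattern and HTML below are illustrative:

```python
import fnmatch
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect every href attribute from an HTML page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def matching_links(html: str, pattern: str):
    """Return hrefs whose filename part matches a shell-style wildcard."""
    parser = LinkCollector()
    parser.feed(html)
    return [link for link in parser.links
            if fnmatch.fnmatch(link.rsplit("/", 1)[-1], pattern)]
```

Each matching link can then be fetched with urllib.request.urlretrieve (or urllib.urlretrieve on Python 2).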
0 | 0 | I have a python script that uses threads and makes lots of HTTP requests. I think what's happening is that while a HTTP request (using urllib2) is reading, it's blocking and not responding to CtrlC to stop the program. Is there any way around this? | false | 1,364,173 | 1 | 0 | 0 | 10 | On Mac press Ctrl+\ to quit a python process attached to a terminal. | 0 | 234,195 | 0 | 142 | 2009-09-01T19:17:00.000 | python | Stopping python using ctrl+c | 1 | 7 | 12 | 48,303,184 | 0 |
0 | 0 | I have a python script that uses threads and makes lots of HTTP requests. I think what's happening is that while a HTTP request (using urllib2) is reading, it's blocking and not responding to CtrlC to stop the program. Is there any way around this? | false | 1,364,173 | 1 | 0 | 0 | 24 | This post is old but I recently ran into the same problem of Ctrl+C not terminating Python scripts on Linux. I used Ctrl+\ (SIGQUIT). | 0 | 234,195 | 0 | 142 | 2009-09-01T19:17:00.000 | python | Stopping python using ctrl+c | 1 | 7 | 12 | 40,704,008 | 0 |
0 | 0 | I have a python script that uses threads and makes lots of HTTP requests. I think what's happening is that while a HTTP request (using urllib2) is reading, it's blocking and not responding to CtrlC to stop the program. Is there any way around this? | false | 1,364,173 | 0.049958 | 0 | 0 | 3 | On a mac / in Terminal:
Show Inspector (right click within the terminal window or Shell >Show Inspector)
click the Settings icon above "running processes"
choose from the list of options under "Signal Process Group" (Kill, terminate, interrupt, etc). | 0 | 234,195 | 0 | 142 | 2009-09-01T19:17:00.000 | python | Stopping python using ctrl+c | 1 | 7 | 12 | 42,792,308 | 0 |
0 | 0 | I have a python script that uses threads and makes lots of HTTP requests. I think what's happening is that while a HTTP request (using urllib2) is reading, it's blocking and not responding to CtrlC to stop the program. Is there any way around this? | false | 1,364,173 | 0.016665 | 0 | 0 | 1 | Forcing the program to close using Alt+F4 (shuts down current program)
Spamming the X button on CMD, for example.
Task Manager (first Windows+R and then "taskmgr") and then end the task.
Those may help. | 0 | 234,195 | 0 | 142 | 2009-09-01T19:17:00.000 | python | Stopping python using ctrl+c | 1 | 7 | 12 | 52,672,359 | 0 |
0 | 0 | I have a python script that uses threads and makes lots of HTTP requests. I think what's happening is that while a HTTP request (using urllib2) is reading, it's blocking and not responding to CtrlC to stop the program. Is there any way around this? | false | 1,364,173 | 1 | 0 | 0 | 57 | If it is running in the Python shell use Ctrl + Z, otherwise locate the python process and kill it. | 0 | 234,195 | 0 | 142 | 2009-09-01T19:17:00.000 | python | Stopping python using ctrl+c | 1 | 7 | 12 | 1,364,179 | 0 |
0 | 0 | I have a python script that uses threads and makes lots of HTTP requests. I think what's happening is that while a HTTP request (using urllib2) is reading, it's blocking and not responding to CtrlC to stop the program. Is there any way around this? | false | 1,364,173 | 0.016665 | 0 | 0 | 1 | For the record, what killed the process on my Raspberry 3B+ (running raspbian) was Ctrl+'. On my French AZERTY keyboard, the key ' is also number 4. | 0 | 234,195 | 0 | 142 | 2009-09-01T19:17:00.000 | python | Stopping python using ctrl+c | 1 | 7 | 12 | 54,316,333 | 0 |
0 | 0 | I have a python script that uses threads and makes lots of HTTP requests. I think what's happening is that while a HTTP request (using urllib2) is reading, it's blocking and not responding to CtrlC to stop the program. Is there any way around this? | true | 1,364,173 | 1.2 | 0 | 0 | 206 | On Windows, the only sure way is to use CtrlBreak. Stops every python script instantly!
(Note that on some keyboards, "Break" is labeled as "Pause".) | 0 | 234,195 | 0 | 142 | 2009-09-01T19:17:00.000 | python | Stopping python using ctrl+c | 1 | 7 | 12 | 1,364,199 | 0 |
0 | 0 | I'm trying to play with inter-process communication and since I could not figure out how to use named pipes under Windows I thought I'll use network sockets. Everything happens locally. The server is able to launch slaves in a separate process and listens on some port. The slaves do their work and submit the result to ... | false | 1,365,265 | 1 | 0 | 0 | 47 | Bind the socket to port 0. A random free port from 1024 to 65535 will be selected. You may retrieve the selected port with getsockname() right after bind(). | 0 | 154,753 | 1 | 189 | 2009-09-02T00:07:00.000 | python,sockets,ipc,port | On localhost, how do I pick a free port number? | 1 | 2 | 5 | 1,365,281 | 0 |
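The bind-to-port-0 trick from the accepted answer, as a small helper. Note the inherent race: another process may grab the port between close() and your own later bind(), so a real server should instead keep the listening socket open and publish the port it got:

```python
import socket

def pick_free_port() -> int:
    """Ask the OS for a currently-free TCP port by binding to port 0."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(("127.0.0.1", 0))       # port 0 means "any free port"
    port = s.getsockname()[1]      # the port the OS actually chose
    s.close()
    return port
```

For the master/slave setup in the question, the safer pattern is: the master binds to port 0, reads the chosen port with getsockname(), and passes that number to each slave it spawns.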
0 | 0 | I'm trying to play with inter-process communication and since I could not figure out how to use named pipes under Windows I thought I'll use network sockets. Everything happens locally. The server is able to launch slaves in a separate process and listens on some port. The slaves do their work and submit the result to ... | false | 1,365,265 | 0.158649 | 0 | 0 | 4 | You can listen on whatever port you want; generally, user applications should listen to ports 1024 and above (through 65535). The main thing if you have a variable number of listeners is to allocate a range to your app - say 20000-21000, and CATCH EXCEPTIONS. That is how you will know if a port is unusable (used by a... | 0 | 154,753 | 1 | 189 | 2009-09-02T00:07:00.000 | python,sockets,ipc,port | On localhost, how do I pick a free port number? | 1 | 2 | 5 | 1,365,283 | 0 |
0 | 0 | I'm trying to use Twisted in a sort of spidering program that manages multiple client connections. I'd like to maintain a pool of about 5 clients working at one time. The functionality of each client is to connect to a specified IRC server that it gets from a list, enter a specific channel, and then save the list ... | true | 1,365,737 | 1.2 | 0 | 0 | 4 | The best option is really just to do the obvious thing here. Don't have a loop, or a repeating timed call; just have handlers that do the right thing.
Keep a central connection-management object around, and make event-handling methods feed it the information it needs to keep going. When it starts, make 5 outgoing con... | 0 | 4,725 | 1 | 6 | 2009-09-02T03:45:00.000 | python,twisted | Managing multiple Twisted client connections | 1 | 1 | 3 | 1,408,498 | 0 |
1 | 0 | I want to download a list of web pages. I know wget can do this. However, downloading every URL every five minutes and saving them to a folder seems beyond the capability of wget.
Does anyone know of a tool in Java, Python, or Perl which accomplishes this task?
Thanks in advance. | true | 1,367,189 | 1.2 | 0 | 0 | 5 | Write a bash script that uses wget and put it in your crontab to run every 5 minutes. (*/5 * * * *)
If you need to keep a history of all these web pages, set a variable at the beginning of your script with the current unixtime and append it to the output filenames. | 0 | 2,545 | 0 | 1 | 2009-09-02T11:39:00.000 | python,download,webpage,wget,web-crawler | How to download a webpage in every five minutes? | 1 | 1 | 2 | 1,367,209 | 0 |
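The same idea without cron/wget, in Python: a polling loop with the current unixtime appended to each output filename so every snapshot is kept. The URL list and interval are placeholders:

```python
import time
import urllib.request
from urllib.parse import urlparse

def snapshot_name(url: str, now: float) -> str:
    """Build a history-preserving filename: hostname plus unixtime."""
    host = urlparse(url).hostname or "page"
    return "%s-%d.html" % (host, int(now))

def poll(urls, interval=300):
    """Download each URL every `interval` seconds, keeping every snapshot."""
    while True:
        now = time.time()
        for url in urls:
            with urllib.request.urlopen(url) as resp:
                data = resp.read()
            with open(snapshot_name(url, now), "wb") as f:
                f.write(data)
        time.sleep(interval)

# poll(["http://example.com/"])   # runs forever; Ctrl+C to stop
```

Cron plus wget is still simpler operationally (it survives reboots); the Python loop is mainly useful if you want to post-process the pages in the same script.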
0 | 0 | I would like to read a website asynchronously, which isn't possible with urllib as far as I know. Now I tried reading with plain sockets, but HTTP is giving me hell.
I run into all kinds of funky encodings, for example transfer-encoding: chunked, have to parse all that stuff manually, and I feel like coding C, not p... | false | 1,367,453 | 0.049958 | 0 | 0 | 1 | The furthest I came was using the modified asynchttp that codeape suggested. I have tried to use both asyncore/asynchat and asynchttp, with lots of pain. It took me far too long to try to fix all the bugs in it (there's a method handle_read, nearly copied from asyncore, only badly indented, and it was giving me headaches with... | 0 | 2,352 | 0 | 7 | 2009-09-02T12:39:00.000 | python,web-services,sockets | Reading a website with asyncore | 1 | 1 | 4 | 1,372,289 | 0 |
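asyncore/asynchat are painful for HTTP and are legacy today; in modern Python the stdlib route for asynchronous I/O is asyncio. A raw HTTP/1.0 sketch (HTTP/1.0 sidesteps chunked transfer-encoding because the response simply ends when the server closes the connection) -- real code should use a proper async HTTP client library instead:

```python
import asyncio

async def fetch(host: str, port: int, path: str = "/") -> bytes:
    """Minimal raw HTTP/1.0 GET over asyncio streams; returns the body."""
    reader, writer = await asyncio.open_connection(host, port)
    request = "GET %s HTTP/1.0\r\nHost: %s\r\n\r\n" % (path, host)
    writer.write(request.encode("ascii"))
    await writer.drain()
    raw = await reader.read()                 # read until the server closes
    writer.close()
    _headers, _, body = raw.partition(b"\r\n\r\n")
    return body
```

Several fetch() coroutines can then run concurrently with asyncio.gather, which is exactly the "read websites asynchronously" goal of the question.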
0 | 0 | I am trying to understand CRC32 to generate a unique URL for a web page.
If we use CRC32, what is the maximum number of URLs that can be used so that we can avoid duplicates?
What would the approximate string length need to be to keep the checksum within 2^32?
When I tried a UUID for a URL and converted the UUID bytes to base 64,... | true | 1,401,218 | 1.2 | 1 | 0 | 7 | There is no such number as the "maximum number of URLs that can be used so that we can avoid duplicates" for CRC32.
The problem is that CRC32 can produce duplicates, and it's not a function of how many values you throw at it, it's a function of what those values look like.
So you might have a collision on the second url, if... | 0 | 3,162 | 0 | 5 | 2009-09-09T18:16:00.000 | c#,python,url,crc32,short-url | CRC32 to make short URL for web | 1 | 5 | 6 | 1,401,231 | 0 |
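For concreteness, CRC32 in Python is one zlib call; the 8-hex-digit output makes the collision problem visible, since the entire output space is only 2^32 values:

```python
import zlib

def crc32_hex(url: str) -> str:
    """CRC32 of a URL rendered as 8 hex digits.

    Short, but NOT collision-free: any two URLs can share a checksum, which is
    why the answers above recommend a database row ID instead.
    """
    return "%08x" % (zlib.crc32(url.encode("utf-8")) & 0xFFFFFFFF)
```

The `& 0xFFFFFFFF` mask keeps the result an unsigned 32-bit value (on Python 2, zlib.crc32 could return a negative int).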
0 | 0 | I am trying to understand CRC32 to generate a unique URL for a web page.
If we use CRC32, what is the maximum number of URLs that can be used so that we can avoid duplicates?
What would the approximate string length need to be to keep the checksum within 2^32?
When I tried a UUID for a URL and converted the UUID bytes to base 64,... | false | 1,401,218 | 0.132549 | 1 | 0 | 4 | If you're already storing the full URL in a database table, an integer ID is pretty short, and can be made shorter by converting it to base 16, 64, or 85. If you can use a UUID, you can use an integer, and you may as well, since it's shorter and I don't see what benefit a UUID would provide in your lookup table. | 0 | 3,162 | 0 | 5 | 2009-09-09T18:16:00.000 | c#,python,url,crc32,short-url | CRC32 to make short URL for web | 1 | 5 | 6 | 1,401,237 | 0 |
0 | 0 | I am trying to understand CRC32 to generate a unique URL for a web page.
If we use CRC32, what is the maximum number of URLs that can be used so that we can avoid duplicates?
What would the approximate string length need to be to keep the checksum within 2^32?
When I tried a UUID for a URL and converted the UUID bytes to base 64,... | false | 1,401,218 | 0.033321 | 1 | 0 | 1 | CRC32 means cyclic redundancy check with 32 bits, where an arbitrary number of input bits is reduced to a 32-bit checksum. Checksum functions are not injective: multiple input values can map to the same output value. So you cannot invert the function. | 0 | 3,162 | 0 | 5 | 2009-09-09T18:16:00.000 | c#,python,url,crc32,short-url | CRC32 to make short URL for web | 1 | 5 | 6 | 1,401,243 | 0 |
0 | 0 | I am trying to understand CRC32 to generate a unique URL for a web page.
If we use CRC32, what is the maximum number of URLs that can be used so that we can avoid duplicates?
What would the approximate string length need to be to keep the checksum within 2^32?
When I tried a UUID for a URL and converted the UUID bytes to base 64,... | false | 1,401,218 | 0 | 1 | 0 | 0 | No, even if you use MD5 or any other checksum, the URL can be a duplicate; it all depends on your luck.
So don't make a unique URL based on those checksums | 0 | 3,162 | 0 | 5 | 2009-09-09T18:16:00.000 | c#,python,url,crc32,short-url | CRC32 to make short URL for web | 1 | 5 | 6 | 1,401,286 | 0 |
0 | 0 | I am trying to understand CRC32 to generate a unique URL for a web page.
If we use CRC32, what is the maximum number of URLs that can be used so that we can avoid duplicates?
What would the approximate string length need to be to keep the checksum within 2^32?
When I tried a UUID for a URL and converted the UUID bytes to base 64,... | false | 1,401,218 | 0.066568 | 1 | 0 | 2 | The right way to make a short URL is to store the full one in the database and publish something that maps to the row index. A compact way is to use the Base64 of the row ID, for example. Or you could use a UUID for the primary key and show that.
Do not use a checksum, because it's too small and very likely to conflic... | 0 | 3,162 | 0 | 5 | 2009-09-09T18:16:00.000 | c#,python,url,crc32,short-url | CRC32 to make short URL for web | 1 | 5 | 6 | 1,401,331 | 0 |
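A sketch of the row-ID approach recommended above, using base 62 rather than Base64 so the tokens stay URL-safe without escaping. Unlike a checksum this mapping is reversible, so collisions are impossible by construction:

```python
ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"

def encode_id(n: int) -> str:
    """Encode a non-negative database row ID as a short base-62 token."""
    if n == 0:
        return ALPHABET[0]
    out = []
    while n:
        n, rem = divmod(n, 62)
        out.append(ALPHABET[rem])
    return "".join(reversed(out))

def decode_id(token: str) -> int:
    """Invert encode_id: turn a token back into the row ID for lookup."""
    n = 0
    for ch in token:
        n = n * 62 + ALPHABET.index(ch)
    return n
```

A billion rows still encode to only six characters (62^6 is about 5.7e10), so the short URLs stay short for a long time.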
0 | 0 | Okay. So I have about 250,000 high resolution images. What I want to do is go through all of them and find ones that are corrupted. If you know what 4scrape is, then you know the nature of the images I.
Corrupted, to me, means the image is loaded into Firefox and it says
The image “such and such image” cannot be displaye... | false | 1,401,527 | 0.119427 | 1 | 0 | 3 | If your exact requirement is that it show correctly in Firefox, you may have a difficult time - the only way to be sure would be to link to the exact same image-loading source code as Firefox.
Basic image corruption (file is incomplete) can be detected simply by trying to open the file using any number of image librar... | 0 | 23,397 | 0 | 20 | 2009-09-09T19:15:00.000 | php,python,image | How do I programmatically check whether an image (PNG, JPEG, or GIF) is corrupted? | 1 | 1 | 5 | 1,401,566 | 0 |
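A stdlib-only sketch of the "file is incomplete" check mentioned above, using start/end signatures for JPEG, PNG, and GIF. This only catches truncation (the common download-corruption case); for real validation use an image library, e.g. PIL's Image.open(path) followed by verify():

```python
def looks_complete(data: bytes) -> bool:
    """Cheap completeness check via file-format start/end signatures.

    JPEG starts with FF D8 and ends with FF D9; PNG starts with an 8-byte
    signature and finishes with an IEND chunk; GIF starts with GIF87a/GIF89a
    and ends with a 0x3B trailer. The rstrip tolerates trailing NUL padding
    that some writers append.
    """
    if data.startswith(b"\xff\xd8"):                  # JPEG
        return data.rstrip(b"\x00").endswith(b"\xff\xd9")
    if data.startswith(b"\x89PNG\r\n\x1a\n"):         # PNG
        return b"IEND" in data[-16:]                  # IEND chunk at the tail
    if data[:6] in (b"GIF87a", b"GIF89a"):            # GIF
        return data.rstrip(b"\x00").endswith(b"\x3b")
    return False                                       # unknown format
```

For 250,000 files, this kind of byte check is also far faster than fully decoding every image, so it works well as a first pass before a library-based verify on the survivors.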
0 | 0 | How can I find out the http request my python cgi received? I need different behaviors for HEAD and GET.
Thanks! | false | 1,417,715 | 0 | 1 | 0 | 0 | Why do you need to distinguish between GET and HEAD?
Normally you shouldn't distinguish and should treat a HEAD request just like a GET. This is because a HEAD request is meant to return the exact same headers as a GET. The only difference is there will be no response content. Just because there is no response content ... | 0 | 6,773 | 0 | 12 | 2009-09-13T13:12:00.000 | python,http,httpwebrequest,cgi | Detecting the http request type (GET, HEAD, etc) from a python cgi | 1 | 1 | 3 | 1,420,886 | 0 |
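For the cases where you genuinely do need to branch, CGI delivers the HTTP verb in the REQUEST_METHOD environment variable. A minimal sketch (the handler's return values are placeholders for "send headers only" vs. "send headers and body"):

```python
import os

def request_method(default: str = "GET") -> str:
    """In CGI, the web server exposes the HTTP verb via the environment."""
    return os.environ.get("REQUEST_METHOD", default).upper()

def handle():
    if request_method() == "HEAD":
        # Per the advice above: same headers as GET, just no body
        return "headers-only"
    return "headers-and-body"
```

In practice many servers suppress the body of a HEAD response themselves, which is why treating HEAD like GET is usually sufficient.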
0 | 0 | I'm trying to access a SOAP API using Suds. The SOAP API documentation states that I have to provide three cookies with some login data. How can I accomplish this? | true | 1,417,902 | 1.2 | 0 | 0 | 4 | Set a "Cookie" HTTP Request Header having the required name/value pairs. This is how Cookie values are usually transmitted in HTTP Based systems. You can add multiple key/value pairs in the same http header.
Single Cookie
Cookie: name1=value1
Multiple Cookies (separated by semicolons)
Cookie: name1=value1; name2=val... | 0 | 1,581 | 0 | 2 | 2009-09-13T14:44:00.000 | python,soap,cookies,suds | Sending cookies in a SOAP request using Suds | 1 | 1 | 1 | 1,417,916 | 0 |
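Building that Cookie header value in Python is a one-liner; with suds it can then be attached to requests as an extra HTTP header (the set_options(headers=...) call is from suds' options API as I recall -- verify against your suds version):

```python
def cookie_header(cookies: dict) -> dict:
    """Build a Cookie request header from name/value pairs,
    joined with '; ' as shown in the examples above."""
    value = "; ".join("%s=%s" % (k, v) for k, v in cookies.items())
    return {"Cookie": value}

# With suds it would look roughly like (names here are illustrative):
#   client = suds.client.Client(wsdl_url)
#   client.set_options(headers=cookie_header({"sessionid": "abc123"}))
```

Note the values are not escaped here; if your cookie values can contain ';' or '=', quote them first.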
0 | 0 | I am using Selenium RC to automate some browser operations but I want the browser to be invisible. Is this possible? How? What about Selenium Grid? Can I hide the Selenium RC window also? | false | 1,418,082 | 0.066568 | 0 | 0 | 4 | If you're on Windows, one option is to run the tests under a different user account. This means the browser and java server will not be visible to your own account. | 0 | 90,474 | 0 | 93 | 2009-09-13T16:07:00.000 | python,selenium,selenium-rc | Is it possible to hide the browser in Selenium RC? | 1 | 4 | 12 | 1,750,751 | 0 |
0 | 0 | I am using Selenium RC to automate some browser operations but I want the browser to be invisible. Is this possible? How? What about Selenium Grid? Can I hide the Selenium RC window also? | false | 1,418,082 | 0.049958 | 0 | 0 | 3 | This is how I run my tests with maven on a linux desktop (Ubuntu). I got fed up not being able to work with the firefox webdriver always taking focus.
I installed xvfb
xvfb-run -a mvn clean install
That's it | 0 | 90,474 | 0 | 93 | 2009-09-13T16:07:00.000 | python,selenium,selenium-rc | Is it possible to hide the browser in Selenium RC? | 1 | 4 | 12 | 11,261,393 | 0 |
0 | 0 | I am using Selenium RC to automate some browser operations but I want the browser to be invisible. Is this possible? How? What about Selenium Grid? Can I hide the Selenium RC window also? | false | 1,418,082 | 0 | 0 | 0 | 0 | On MacOSX, I haven't been able to hide the browser window, but at least I figured out how to move it to a different display so it doesn't disrupt my workflow so much. While Firefox is running tests, just control-click its icon in the dock, select Options, and Assign to Display 2. | 0 | 90,474 | 0 | 93 | 2009-09-13T16:07:00.000 | python,selenium,selenium-rc | Is it possible to hide the browser in Selenium RC? | 1 | 4 | 12 | 24,662,478 | 0 |
0 | 0 | I am using Selenium RC to automate some browser operations but I want the browser to be invisible. Is this possible? How? What about Selenium Grid? Can I hide the Selenium RC window also? | false | 1,418,082 | 0 | 0 | 0 | 0 | Using headless Chrome would be your best bet, or you could post directly to the site to interact with it, which would save a lot of compute power for other things/processes. I use this when testing out web automation bots that search for shoes on multiple sites using cpu heavy elements, the more power you save, and the... | 0 | 90,474 | 0 | 93 | 2009-09-13T16:07:00.000 | python,selenium,selenium-rc | Is it possible to hide the browser in Selenium RC? | 1 | 4 | 12 | 55,484,939 | 0 |
0 | 0 | I have a python program which starts up a PHP script using the subprocess.Popen() function. The PHP script needs to communicate back-and-forth with Python, and I am trying to find an easy but robust way to manage the message sending/receiving.
I have already written a working protocol using basic sockets, but it doesn'... | false | 1,424,593 | 0 | 1 | 0 | 0 | You could look at shared memory or named pipes, but I think there are two more likely options, assuming at least one of these languages is being used for a webapp:
A. Use your database's atomicity. In python, begin a transaction, put a message into a table, and end the transaction. From php, begin a transaction, take... | 0 | 2,322 | 0 | 1 | 2009-09-15T00:47:00.000 | php,python,ipc | Easy, Robust IPC between Python and PHP | 1 | 1 | 2 | 1,424,687 | 0 |
0 | 0 | I want to simulate MyApp that imports a module (ResourceX) which requires a resource that is not available at the time and will not work.
A solution for this is to make and import a mock module of ResourceX (named ResourceXSimulated) and divert it to MyApp as ResourceX. I want to do this in order to avoid breaking a l... | false | 1,443,173 | 0 | 1 | 0 | 0 | Yes, Python can do that, and so long as the methods exposed in the ResourceXSimulated module "look and smell" like these of the original module, the application should not see much any difference (other than, I'm assuming, bogus data fillers, different response times and such). | 0 | 256 | 0 | 1 | 2009-09-18T08:12:00.000 | python,testing,mocking,module,monkeypatching | Is it possible to divert a module in python? (ResourceX diverted to ResourceXSimulated) | 1 | 2 | 5 | 1,443,195 | 0 |
0 | 0 | I want to simulate MyApp that imports a module (ResourceX) which requires a resource that is not available at the time and will not work.
A solution for this is to make and import a mock module of ResourceX (named ResourceXSimulated) and divert it to MyApp as ResourceX. I want to do this in order to avoid breaking a l... | false | 1,443,173 | 0.039979 | 1 | 0 | 1 | Yes, it's possible. Some starters:
You can "divert" modules by manipulating sys.modules. It keeps a list of imported modules, and there you can make your module appear under the same name as the original one. You must do this manipulating before any module that imports the module you want to fake though.
You can also m... | 0 | 256 | 0 | 1 | 2009-09-18T08:12:00.000 | python,testing,mocking,module,monkeypatching | Is it possible to divert a module in python? (ResourceX diverted to ResourceXSimulated) | 1 | 2 | 5 | 1,443,281 | 0 |
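A minimal sketch of that sys.modules diversion, using the module names from the question; the get_data attribute and its return value are invented for illustration:

```python
import sys
import types

# Build the simulated module in-memory (it could also be a real .py file
# imported under its own name and then re-registered here)
ResourceXSimulated = types.ModuleType("ResourceX")
ResourceXSimulated.get_data = lambda: "bogus data"   # stand-in for the real API

# Divert: any later `import ResourceX` now resolves to the simulated module.
# As noted above, this must happen before MyApp imports ResourceX.
sys.modules["ResourceX"] = ResourceXSimulated

import ResourceX   # served from sys.modules; no ResourceX.py is needed
```

Because the import system checks sys.modules first, MyApp's own `import ResourceX` statements need no changes at all.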
0 | 0 | I want to stream binary data using Python. I do not have any idea how to achieve it. I created a Python socket program using SOCK_DGRAM. The problem with SOCK_STREAM is that it does not work over the Internet, as our ISP doesn't allow a TCP server socket.
I want to transmit screenshots periodically to a remote computer.
I have ... | true | 1,451,349 | 1.2 | 0 | 0 | 2 | There are two problems here.
First problem, you will need to be able to address the remote party. This is related to what you referred to as "does not work over Internet as most ISP don't allow TCP server socket". It is usually difficult because the other party could be placed behind a NAT or a firewall.
As for the s... | 0 | 1,737 | 0 | 1 | 2009-09-20T16:01:00.000 | python,sockets | How to stream binary data in python | 1 | 2 | 2 | 1,451,356 | 0 |
0 | 0 | I want to stream binary data using Python. I do not have any idea how to achieve it. I created a Python socket program using SOCK_DGRAM. The problem with SOCK_STREAM is that it does not work over the Internet, as our ISP doesn't allow a TCP server socket.
I want to transmit screenshots periodically to a remote computer.
I have ... | false | 1,451,349 | 0.291313 | 0 | 0 | 3 | SOCK_STREAM is the correct way to stream data.
What you're saying about ISPs makes very little sense; they don't control whether or not your machine listens on a certain port on an interface. Perhaps you're talking about firewall/addressing issues?
If you insist on using UDP (and you shouldn't because you'll have to wo... | 0 | 1,737 | 0 | 1 | 2009-09-20T16:01:00.000 | python,sockets | How to stream binary data in python | 1 | 2 | 2 | 1,451,365 | 0 |
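A minimal sketch of the SOCK_STREAM approach the answer recommends, run entirely over localhost so it is self-contained (the payload bytes are arbitrary placeholders for real screenshot data):

```python
import socket
import threading

PAYLOAD = b"\x00\x01binary screenshot bytes\xff"  # arbitrary binary data

def serve(server_sock):
    # Accept one connection and stream the payload over it.
    conn, _ = server_sock.accept()
    with conn:
        conn.sendall(PAYLOAD)  # TCP delivers this reliably and in order

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=serve, args=(server,))
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
chunks = []
while True:
    chunk = client.recv(4096)   # read until the server closes the connection
    if not chunk:
        break
    chunks.append(chunk)
client.close()
t.join()
server.close()

received = b"".join(chunks)
assert received == PAYLOAD
```

Note that TCP gives no packet boundaries: the receiver must loop on recv() and reassemble, as above. The NAT/firewall reachability issue the answers discuss is separate and not addressed by this sketch.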
0 | 0 | I am sending packets from one PC to another. I am using a Python socket: socket.socket(socket.AF_INET, socket.SOCK_DGRAM). Do we need to take care of the order in which packets are received?
In the ISO-OSI model, layers below the transport layer handle all packet communication. Are all ISO-OSI layers present in the program? Or some ... | false | 1,458,087 | 0.462117 | 0 | 0 | 5 | SOCK_DGRAM means you want to send packets by UDP -- no order guarantee, no guarantee of reception, no guarantee of lack of repetition. SOCK_STREAM would imply TCP -- no packet boundary guarantee, but (unless the connection's dropped;-) guarantee of order, reception, and no duplication. TCP/IP, the networking model tha...
0 | 0 | I am sending packets from one PC to another. I am using a Python socket: socket.socket(socket.AF_INET, socket.SOCK_DGRAM). Do we need to take care of the order in which packets are received?
In the ISO-OSI model, layers below the transport layer handle all packet communication. Are all ISO-OSI layers present in the program? Or some ... | true | 1,458,087 | 1.2 | 0 | 0 | 4 | To answer your immediate question, if you're using SOCK_STREAM, then you're actually using TCP, which is an implementation of the transport layer which does take care of packet ordering and integrity for you. So it sounds like that's what you want to use. SOCK_DGRAM is actually UDP, which doesn't take care of any integ...
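Since SOCK_DGRAM (UDP) gives no ordering guarantee, an application that stays with UDP must add its own sequencing. A minimal localhost sketch (the 4-byte sequence prefix is one common ad-hoc scheme, not a standard protocol):

```python
import socket
import struct

recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))   # port 0: OS picks a free port
recv_sock.settimeout(5)            # avoid hanging if a datagram is lost
addr = recv_sock.getsockname()

send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Prefix each datagram with a 4-byte big-endian sequence number so the
# receiver can reorder (or detect loss) itself -- UDP will not do it for us.
for seq, payload in enumerate([b"alpha", b"beta", b"gamma"]):
    send_sock.sendto(struct.pack("!I", seq) + payload, addr)

packets = {}
for _ in range(3):
    data, _ = recv_sock.recvfrom(4096)
    seq = struct.unpack("!I", data[:4])[0]
    packets[seq] = data[4:]

# Reassemble in sequence order regardless of arrival order.
message = b"".join(packets[i] for i in sorted(packets))
send_sock.close()
recv_sock.close()
```

As the answers note, switching to SOCK_STREAM (TCP) makes this bookkeeping unnecessary, because the transport layer already guarantees ordered, reliable delivery.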
0 | 0 | I have a file contained in a key in my S3 bucket. I want to create a new key, which will contain the same file. Is it possible to do without downloading that file?
I'm looking for a solution in Python (and preferably boto library). | false | 1,464,961 | 0 | 0 | 0 | 0 | Note that the 'copy' method on the Key object has a "preserve_acl" parameter (False by default) that will copy the source's ACL to the destination object. | 0 | 9,405 | 0 | 17 | 2009-09-23T09:34:00.000 | python,amazon-s3,boto | How to clone a key in Amazon S3 using Python (and boto)? | 1 | 3 | 6 | 7,366,501 | 0 |
0 | 0 | I have a file contained in a key in my S3 bucket. I want to create a new key, which will contain the same file. Is it possible to do without downloading that file?
I'm looking for a solution in Python (and preferably boto library). | false | 1,464,961 | 0.132549 | 0 | 0 | 4 | Browsing through boto's source code I found that the Key object has a "copy" method. Thanks for your suggestion about CopyObject operation. | 0 | 9,405 | 0 | 17 | 2009-09-23T09:34:00.000 | python,amazon-s3,boto | How to clone a key in Amazon S3 using Python (and boto)? | 1 | 3 | 6 | 1,466,148 | 0 |
0 | 0 | I have a file contained in a key in my S3 bucket. I want to create a new key, which will contain the same file. Is it possible to do without downloading that file?
I'm looking for a solution in Python (and preferably boto library). | false | 1,464,961 | 0.066568 | 0 | 0 | 2 | S3 allows object by object copy.
The CopyObject operation creates a copy of an object when you specify the key and bucket of a source object and the key and bucket of a target destination.
Not sure if boto has a compact implementation. | 0 | 9,405 | 0 | 17 | 2009-09-23T09:34:00.000 | python,amazon-s3,boto | How to clone a key in Amazon S3 using Python (and boto)? | 1 | 3 | 6 | 1,465,978 | 0 |
0 | 0 | What steps would be necessary, and what kind of maintenance would be expected if I wanted to contribute a module to the Python standard API? For example I have a module that encapsulates automated update functionality similar to Java's JNLP. | false | 1,465,302 | 0.197375 | 0 | 0 | 2 | First, look at modules on pypi. Download several that are related to what you're doing so you can see exactly what the state of the art is.
For example, look at easy_install for an example of something like what you're proposing.
After looking at other modules, write yours to look like theirs.
Then publish informati... | 1 | 129 | 0 | 4 | 2009-09-23T10:56:00.000 | python,api | What is involved in adding to the standard Python API? | 1 | 1 | 2 | 1,465,505 | 0 |
0 | 0 | Is it possible to extract the type of an object or its class name from a message received on a UDP socket in Python using metaclasses/reflection?
The scenario is like this:
Receive a UDP buffer on a socket.
The UDP buffer is a serialized binary string (a message). But the type of message is not known at this time. So can't de-serial... | false | 1,487,582 | 0.132549 | 0 | 0 | 2 | What you receive from the udp socket is a byte string -- that's all the "type of object or class name" that's actually there. If the byte string was built as a serialized object (e.g. via pickle, or maybe marshal etc) then you can deserialize it back to an object (using e.g. pickle.loads) and then introspect to your h...
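The pickle-based deserialize-then-introspect flow the answer describes can be sketched like this (the Message class is a made-up example standing in for whatever the sender serializes):

```python
import pickle

class Message:
    """Example message type; in practice this would be shared by both ends."""
    def __init__(self, body):
        self.body = body

# Sender side: serialize the object into the bytes placed in the UDP datagram.
buffer = pickle.dumps(Message("hello"))

# Receiver side: the buffer is just bytes; deserialize first,
# then introspect to recover the type.
obj = pickle.loads(buffer)
print(type(obj).__name__)  # -> Message
```

Caution: calling pickle.loads on untrusted network data can execute arbitrary code; a protocol exposed to the network should instead tag each message with an explicit type identifier and validate it before deserializing.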
0 | 0 | Is it possible to extract the type of an object or its class name from a message received on a UDP socket in Python using metaclasses/reflection?
The scenario is like this:
Receive a UDP buffer on a socket.
The UDP buffer is a serialized binary string (a message). But the type of message is not known at this time. So can't de-serial... | false | 1,487,582 | 0 | 0 | 0 | 0 | Updated answer after updated question:
"But the type of message is not known at this time. So can't de-serialize into appropriate message."
What you get is a sequence of bytes. How that sequence of bytes should be interpreted is a question of how the protocol looks. Only you know what protocol you use. So if you don't ... | 0 | 642 | 0 | 1 | 2009-09-28T15:08:00.000 | python,sockets,udp | Type of object from udp buffer in python using metaclasses/reflection | 1 | 2 | 3 | 1,487,602 | 0
0 | 0 | I've developed a chat server using the Twisted framework in Python. It works fine with a Telnet client, but when I use my flash client a problem appears...
(the flash client works fine with my old PHP chat server; I rewrote the server in Python to gain performance)
The connection is established between the flash client and the... | false | 1,489,931 | 0 | 0 | 0 | 0 | I found out that the default line delimiter used by Twisted is '\r\n'. It can be overridden in your child class with:
LineOnlyReceiver.delimiter = '\n' | 0 | 927 | 0 | 1 | 2009-09-29T00:02:00.000 | python,flash,twisted | Chat server with Twisted framework in python can't receive data from flash client | 1 | 2 | 2 | 1,490,530 | 0 |
0 | 0 | I've developed a chat server using the Twisted framework in Python. It works fine with a Telnet client, but when I use my flash client a problem appears...
(the flash client works fine with my old PHP chat server; I rewrote the server in Python to gain performance)
The connection is established between the flash client and the... | true | 1,489,931 | 1.2 | 0 | 0 | 1 | Changing LineOnlyReceiver.delimiter is a pretty bad idea, since that changes the delimiter for all instances of LineOnlyReceiver (unless they've changed it themselves on a subclass or on the instance). If you ever happen to use any such code, it will probably break.
You should change delimiter by setting it on your Lin... | 0 | 927 | 0 | 1 | 2009-09-29T00:02:00.000 | python,flash,twisted | Chat server with Twisted framework in python can't receive data from flash client | 1 | 2 | 2 | 1,729,776 | 0 |
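The fix both answers point at -- setting delimiter on your own subclass rather than on LineOnlyReceiver itself -- relies on ordinary class-attribute shadowing. This pure-Python sketch uses a stand-in class (not the real twisted.protocols.basic.LineOnlyReceiver) so it runs without Twisted; FlashChatProtocol is a made-up name:

```python
# Stand-in for twisted.protocols.basic.LineOnlyReceiver, to show the mechanics.
class LineOnlyReceiver:
    delimiter = b"\r\n"        # Twisted's default line delimiter

class FlashChatProtocol(LineOnlyReceiver):
    delimiter = b"\n"          # the value the asker's flash client needed

# Setting the attribute on the subclass shadows the base class value
# without mutating it, so other LineOnlyReceiver users are unaffected.
assert LineOnlyReceiver.delimiter == b"\r\n"
assert FlashChatProtocol.delimiter == b"\n"
```

With real Twisted, the same one-line class attribute on your protocol subclass is all that is needed; patching the shared base class would silently change every other protocol in the process.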
0 | 0 | I'm trying to create XML using the ElementTree object structure in python. It all works very well except when it comes to processing instructions. I can create a PI easily using the factory function ProcessingInstruction(), but it doesn't get added into the elementtree. I can add it manually, but I can't figure out ... | false | 1,489,949 | 0.07983 | 1 | 0 | 2 | Yeah, I don't believe it's possible, sorry. ElementTree provides a simpler interface to (non-namespaced) element-centric XML processing than DOM, but the price for that is that it doesn't support the whole XML infoset.
There is no apparent way to represent the content that lives outside the root element (comments, PIs,... | 0 | 4,332 | 0 | 6 | 2009-09-29T00:09:00.000 | python,xml,elementtree | ElementTree in Python 2.6.2 Processing Instructions support? | 1 | 1 | 5 | 1,490,057 | 0 |