duncan.parkes@gmail.com
8e4be26f71
Fix Greenwich - just the council moving things around in a pointless fashion.
15 years ago
duncan.parkes@gmail.com
235a59a5d6
Bit of a refactor of PublicAccess in order to fix Hambleton.
15 years ago
duncan.parkes@gmail.com
bcc9c73d2c
Fix Middlesbrough scraper.
15 years ago
duncan.parkes@gmail.com
cafb412b56
Fix Canterbury - just a new url.
15 years ago
duncan.parkes@gmail.com
8285ba55be
Oops - remove pdb from AcolnetParser.
15 years ago
duncan.parkes@gmail.com
2b1da564d8
Fix New Forest District Council.
15 years ago
duncan.parkes@gmail.com
be24641c3c
Make Crawley scraper work with python 2.5...
15 years ago
duncan.parkes@gmail.com
ee425b23b5
Add scraper for Crawley from Andre.
15 years ago
duncan.parkes@gmail.com
e51eab174b
Add Wychavon scraper from Thomas.
15 years ago
duncan.parkes@gmail.com
8c9ad9213c
Add info url for Elmbridge.
15 years ago
duncan.parkes@gmail.com
cc7c4812d8
Update Rutland scraper to ApplicationSearchServlet - Rutland is no longer RutlandLike!
15 years ago
duncan.parkes
d28506549f
Remove a couple of permanently broken scrapers.
15 years ago
duncan.parkes
c65880cc88
Fix New Forest NP scraper. New URL, slightly different page.
15 years ago
duncan.parkes
2fac95ef61
Fix Lake District scraper - small change to the HTML reducing the depth of tables.
15 years ago
duncan.parkes
af6aea014f
This is a big refactor of the scrapers.
The database table which says which scrapers are where will now be filled in automatically,
which should result in fewer manual editing errors.
I've also redone the python PublicAccess scraper and set all the PublicAccess sites to use
it (removing the PHP PublicAccess scrapers).
15 years ago
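
The refactor above mentions the table of scrapers being filled in automatically rather than edited by hand. As a rough illustration only (the package name, module attributes, and table schema below are assumptions, not the repository's actual code), auto-registration of this kind can scan the scraper modules and upsert one row per scraper:

    # Hypothetical sketch of auto-registering scrapers in a database table.
    # Package name, module attributes, and schema are illustrative only.
    import os
    import sqlite3

    SCRAPER_PACKAGE = "python_scrapers"   # assumed directory of scraper modules

    def discover_scrapers():
        for filename in sorted(os.listdir(SCRAPER_PACKAGE)):
            if not filename.endswith(".py") or filename.startswith("_"):
                continue
            module = __import__("%s.%s" % (SCRAPER_PACKAGE, filename[:-3]),
                                fromlist=["*"])
            # Assume each scraper module exposes these two attributes.
            yield module.authority_name, module.authority_short_name

    def register_scrapers(db_path):
        # Assumes short_name is a unique key, so re-running just refreshes rows.
        conn = sqlite3.connect(db_path)
        for full_name, short_name in discover_scrapers():
            conn.execute("INSERT OR REPLACE INTO authority "
                         "(full_name, short_name) VALUES (?, ?)",
                         (full_name, short_name))
        conn.commit()
        conn.close()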
duncan.parkes
7a5a50ed58
Update PublicAccess scraper to work with BeautifulSoup.
Add all publicaccess sites to the python scraper.
16 years ago
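
On the BeautifulSoup change above: a minimal sketch of pulling rows out of a search-results table with the BeautifulSoup 3 API of that era. The table id, column order, and field names are assumptions for illustration, not the scraper's real markup.

    # Hypothetical sketch: extract rows from a results table with BeautifulSoup 3.
    import urllib2
    from BeautifulSoup import BeautifulSoup

    def cell_text(cell):
        # Join all text nodes so nested tags don't break the extraction.
        return "".join(cell.findAll(text=True)).strip()

    def parse_results(url):
        soup = BeautifulSoup(urllib2.urlopen(url).read())
        table = soup.find("table", {"id": "searchresults"})  # assumed id
        applications = []
        if table is None:
            return applications
        for row in table.findAll("tr")[1:]:   # skip the header row
            cells = row.findAll("td")
            if len(cells) >= 3:               # assumed column order
                applications.append({
                    "council_reference": cell_text(cells[0]),
                    "address": cell_text(cells[1]),
                    "description": cell_text(cells[2]),
                })
        return applications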
duncan.parkes
1bc23b0b9c
Fix Lichfield scraper.
16 years ago
duncan.parkes
cb6ab6e894
Fix Mole Valley scraper.
16 years ago
duncan.parkes
4ea9836f1e
Fix Tamworth scraper (change of IP address)
16 years ago
duncan.parkes
f37d02d3de
Add Thomas' scraper for Solihull.
16 years ago
duncan.parkes
70a94ec280
Add scraper for Weymouth and Portland.
16 years ago
duncan.parkes
887abe9652
Add scraper for Mendip.
Make display method on a planningapplication work out the postcode if it isn't set.
16 years ago
duncan.parkes
77e9d3388f
Add scraper for Broxtowe.
16 years ago
duncan.parkes
7810f01d83
Add scraper for Calderdale.
16 years ago
duncan.parkes
689474a703
Add scraper for the Cairngorms National Park.
16 years ago
duncan.parkes
70c0650637
Add scraper for Leicestershire County Council.
16 years ago
duncan.parkes
f076ecc304
Add scraper for Lichfield. Remove another unused import.
16 years ago
duncan.parkes
f0a0912836
Add parser for Kirklees. Get rid of some unnecessary imports.
16 years ago
duncan.parkes
1761fa79b8
Add python parser for West Dorset, and remove the non-working perl one.
16 years ago
duncan.parkes
ddc81f06ea
Add scraper for Gosport.
Factor out CookieAddingHTTPRedirectHandler.
16 years ago
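
The CookieAddingHTTPRedirectHandler factored out above is the kind of urllib2 handler that carries a Set-Cookie header set during a redirect over to the follow-up request. A minimal sketch, assuming Python 2's urllib2 as used by these scrapers; the repository's actual class may differ in detail.

    import urllib2

    class CookieAddingHTTPRedirectHandler(urllib2.HTTPRedirectHandler):
        # Copy any Set-Cookie header from the redirect response onto the
        # request which follows the redirect.
        def redirect_request(self, req, fp, code, msg, headers, newurl):
            new_request = urllib2.HTTPRedirectHandler.redirect_request(
                self, req, fp, code, msg, headers, newurl)
            if new_request is not None and "set-cookie" in headers:
                new_request.add_header("Cookie", headers["set-cookie"])
            return new_request

    # Usage: install the handler in an opener before fetching pages.
    opener = urllib2.build_opener(CookieAddingHTTPRedirectHandler())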
duncan.parkes
a4f3ce9dac
Fix Dorset County Council scraper (the council seem to be using an IP address now rather than the domain they had
before).
16 years ago
duncan.parkes
2c979a07f5
Fix the Dacorum perl parser.
16 years ago
duncan.parkes
748f3b30b5
Add Caerphilly.
16 years ago
duncan.parkes
49a32a74ca
Change some PlanningExplorer scrapers to use date_registered rather than date_received.
16 years ago
peter@peter.uk.to
42bd542634
Highland, North Ayrshire, Redbridge: updated based on changes made to planning websites
16 years ago
duncan.parkes
a20a53535b
Add Waltham Forest.
16 years ago
duncan.parkes
8e40e8a961
Fixes for Lincoln and Crewe.
Make Hackney use date registered rather than date received.
16 years ago
duncan.parkes
6cf496dfb9
Add scraper for Eastbourne. The info and comment links won't work since they require you to have a cookie. If you go
back to them once you have the cookie, you're fine...
16 years ago
duncan.parkes
d030ce81db
Add scraper for Exmoor. Fix name of Herefordshire.
16 years ago
duncan.parkes
48ec82b485
Add scraper for Herefordshire.
Alter PlanningUtils to CDATA everything, scrapping the xmlquote function.
16 years ago
duncan.parkes
2bacbbb25a
Add scraper for Hastings. Sadly, no decent info urls again. Had to use the search page. The real info url is only
accessible with a referer.
16 years ago
duncan.parkes
77b46a033d
Add Hampshire scraper.
16 years ago
duncan.parkes
420356966c
Adding scraper for Halton.
Also adding the pycurl scraper for Westminster, just in case it is useful to remind us how to do stuff later.
16 years ago
duncan.parkes
7f7c8a00bc
Go back to the urllib2 version of Westminster. This works on disruptiveproactivity.
16 years ago
duncan.parkes
d303944e39
Add the debug back in.
16 years ago
duncan.parkes
7410196fdd
Try a pycurl version of the Westminster scraper.
16 years ago
duncan.parkes
ef6d27ee0a
Fix Lewisham comments email address.
16 years ago
duncan.parkes
28aaf2eba5
Oops - printing the Sutton results twice...
16 years ago
duncan.parkes
ec5b631342
Add more debug.
16 years ago
duncan.parkes
775b7f8cbc
Try moving prints to above the scrape (for the Haringey and Westminster problem).
16 years ago