
Re: Call for help

The challenging thing was getting all the content in a usable form. The
wiki is riddled with spam links that point at action=create links on the
twiki itself. Since wget is brain-damaged in that it first downloads the
link, *then* decides whether it should have done so, this slowed things
down considerably. It would probably be easier in the future to have
access to a tarball of the site's contents made on the server.
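For what it's worth, a sketch of how the create/edit links could be skipped before they are ever fetched. This is not what was actually run: --reject-regex only appeared in later GNU wget releases (1.14+), and the wiki URL below is a placeholder, not the real site.

```shell
# Sketch only: --reject-regex (GNU wget 1.14+) filters URLs before the
# request is made, unlike -R/--reject, which downloads HTML pages first
# and deletes them afterwards. The URL is a placeholder.
wget --mirror --no-parent \
     --reject-regex 'action=(create|edit)' \
     http://example.org/twiki/bin/view/
```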

I pruned the pages I received so that I only scanned the latest versions
of the various twiki pages. I then ran

   grep -R -H -o -n -i fedora.us * >../FL_references_to_fedora_us.txt

from the top of the downloaded tree. The results look like this:
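For illustration, with -R -H -o -n -i grep prints one file:line-number:matched-text triple per hit. The directory and page name below are made up, not taken from the actual run:

```shell
# Hypothetical reconstruction of the output format. Note that the "." in
# the pattern is a regex wildcard, so "fedora.us" also matches strings
# like "fedora-us".
mkdir -p /tmp/fl_demo/SomePage
printf 'Packages were hosted at http://download.fedora.us/fedora\n' \
    > /tmp/fl_demo/SomePage/WebHome.txt
grep -R -H -o -n -i fedora.us /tmp/fl_demo
# → /tmp/fl_demo/SomePage/WebHome.txt:1:fedora.us
```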


And so forth. The output is available at

On Thu, 2005-03-03 at 11:43 -0800, Jesse Keating wrote:
> On Thu, 2005-03-03 at 19:28 +0100, Steffen Grunewald wrote:
> > What about using wget to create a working copy of the wiki contents,
> > then recursive grep through it?
> However you want to accumulate a list is fine with me (:

Howard Owen        RHCE, BMOC, GP    "Even if you are on the right
EGBOK Consultants  Linux Architect    track, you'll get run over if you
hbo egbok com      +1-650-218-2216    just sit there." - Will Rogers
