
v1.91, Build 955: download extended headers

When I tried using the extended headers option today, the file sizes showed up on the search list, but everything it tried to download came back with "423 no such article in group".

I think that's all that I changed, except for playing with the mini group list editor - but that was with a separate .lst file I just used for testing.
 

BinaryBoy's reply to Stoner #214

I'm not seeing this. It might be a temporary problem with the header database on the server still pointing to deleted articles. You could try a different newsgroup or different server and see if the problem still exists.
 

Stoner's reply to Stoner #217

Well, for the time being, I've disabled the option, and when things get back to normal, I'll try it again. If they don't, I'll look into server issues - but I don't think that's the problem.

When I do try it again, are there any particular logging options and/or files I can send that would help? I don't want to do this more than I have to - it really screws things up.
 

BinaryBoy's reply to Stoner #219

-log will log the article numbers Binary Boy is requesting. The cache file itself will hold the article numbers that the server says exist. You could send me log.txt if you like or even both files.

If you want to check it out yourself, you could look in the log.txt for the HEAD and BODY commands to get the article number Binary Boy is trying to download. Then check to see if that same article number is in the cache file.

If the number isn't in the cache file, it means Binary Boy is messing things up internally and requesting non-existent articles. If the article is in the cache file, then either the server is returning bad information or Binary Boy has trashed the cache file somehow.
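If it helps, here's a rough Python sketch of that cross-check. It's only an illustration: I'm assuming the HEAD/BODY commands and their article numbers appear as plain text in log.txt, and that the article numbers in the cache file show up as whole numbers in a text file. The cache file name below is just a placeholder; point it at your actual cache file.

import re

LOG_FILE = "log.txt"
CACHE_FILE = "cache.txt"   # placeholder name; use your actual cache file

# Article numbers Binary Boy asked for via HEAD/BODY commands
requested = set()
with open(LOG_FILE, "r", errors="ignore") as log:
    for entry in log:
        m = re.search(r"\b(HEAD|BODY)\s+(\d+)", entry)
        if m:
            requested.add(m.group(2))

# Every number that appears anywhere in the cache file
cached = set()
with open(CACHE_FILE, "r", errors="ignore") as cache:
    for entry in cache:
        cached.update(re.findall(r"\d+", entry))

missing = requested - cached
if missing:
    print("Requested but not in the cache file (points at an internal problem):")
    for art in sorted(missing, key=int):
        print(" ", art)
else:
    print("Every requested article is in the cache file,")
    print("so a 423 would point at the server or a trashed cache.")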
 

Stoner's reply to Stoner #220

Thanks for the info!

I deleted the cache files after it happened, just to eliminate them as a source of problems. So, if things go OK tomorrow (later today, I guess), I'll enable logging and check the things you mentioned.
 

Stoner's reply to Stoner #223

Well, things have gone from bad to worse. I decided to try and start fresh, and did the following:

1) Changed all the next article numbers in my active list to 0 in all of the newsgroups.
2) Deleted all of the cache and log files.
3) Deleted all the history files.
4) Performed a master reset.

Then I manually ran BB to search for the last 100 files, and I still got the 423s - not on everything, just here and there. But more here than there. The Log tab showed that it's looking for a post greater than the range it's showing - anywhere from 1 to 1000+ greater.

I must be tired and missing something. I'm going to bed - maybe things will be magically better tomorrow. If there's anything else I should do to restart a server from scratch, please let me know. To my mind, I should never receive a 423 unless the files I'm looking for don't exist, and since I'm only asking for the last 100, that should rarely happen.
 

BinaryBoy's reply to Stoner #224

Right-click on the subject window, go to the Remember submenu and click Highest Scanned. Then use "Last New" rather than "Last" as the Starting Point. Last and All don't update the article pointer unless you're remembering highest downloaded rather than highest scanned.

If you want to reset the article pointers to 0 with Master Reset, you need to have a session open with the list loaded. Otherwise you need to do it in the group list editor with the Set All to 0 button.

Where were you seeing the article numbers/ranges? If you have the log enabled, you can see the first and last article numbers available from the server in a line like this:

09:22:57: GetSubjects2(): First article: 35705, Last article: 37081, (Approx) Total: 517. Next: 0, Range: 3000

And a few lines later you'll see the range it actually requests:

09:22:57: Sending[364]: XOVER 34082-37081

Last New can request beyond the end of the available articles just to be safe.
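To make the connection between those two lines a bit more concrete, here's a small Python sketch that derives the requested XOVER range from the First/Last/Next/Range values. It's a simplified illustration of the idea, not the actual Binary Boy code, and the exact rule is just my reading of the numbers in the example above.

import re

entry = ("09:22:57: GetSubjects2(): First article: 35705, "
         "Last article: 37081, (Approx) Total: 517. Next: 0, Range: 3000")

fields = {name: int(value) for name, value in
          re.findall(r"(First article|Last article|Next|Range):\s*(\d+)", entry)}

last = fields["Last article"]
next_pointer = fields["Next"]
span = fields["Range"]

# Go back at most `span` articles from the newest one, or pick up right
# after the remembered pointer if that's more recent. The request can
# run past what the server actually has, which is harmless.
start = max(next_pointer + 1, last - span + 1)
print("Sending: XOVER %d-%d" % (start, last))   # -> XOVER 34082-37081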
 

Stoner's reply to Stoner #233

I'm seeing (what I assumed to be) the article numbers/ranges on the Log tab, like this:

423 no such article in group (3095103 3079495-3095101).

But I only caught this single occurrence today, so it looks like everything's back to normal. I think I didn't help matters any when I used Last instead of Last New and had it remember last downloaded rather than last scanned.

Anyway, I'm going to install 960, make sure everything's still OK with that, then try the extended headers again (this time I'm going to keep a copy of my .lst file so I can just put it back if things go bad).
 

BinaryBoy's reply to Stoner #234

Everything after the error number is human-readable text with no specific format, but it does look like 3095103 was the requested article number. You could check for 3095103 in the cache file. If you had the log enabled, you could also check to see if Binary Boy actually requested that article. Or you could send these files to me.

It's odd how it's only 2 off. I would expect it to either be something crazy like a negative number or a number within the range that had been deleted.
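If you do catch another one in the log, a quick way to see how far off the request was is to pull the numbers out of that 423 line. The parsing below is just a guess at the layout of this particular server's message, since the text after the status code isn't standardized.

import re

entry = "423 no such article in group (3095103 3079495-3095101)."

# Assume the first number in parentheses is the requested article and
# the low-high pair is the range the server says it has.
m = re.search(r"\((\d+)\s+(\d+)-(\d+)\)", entry)
if m:
    requested, low, high = map(int, m.groups())
    if requested > high:
        print("Requested %d is %d past the server's last article %d."
              % (requested, requested - high, high))
    elif requested < low:
        print("Requested %d is below the server's first article %d."
              % (requested, low))
    else:
        print("Requested %d is inside %d-%d, so the article was probably "
              "deleted on the server." % (requested, low, high))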
 

Stoner's reply to Stoner #235

Unfortunately, I deleted the log files when I installed 960 so I wouldn't have so much to sift through. I did check the cache files, and the article is there (at least it is now). Now that we can choose where to place the log files, I'm keeping logging on for a while to see if I can catch another one.

Most of the ones I saw yesterday were only off by 2 - others were off by 102, 502, 1002, and a few others were just some number beyond the current range.

So far, things are working fine with the extended headers in 960. Maybe, like you suggested, the server burped. I'll keep an eye out and give you some more details if it happens again.

BTW, I didn't notice until today that you can use Search in WinXP to look for a word or phrase inside files - really handy for searching the log and cache files!
 

Stoner's reply to Stoner #236

I got a message from my NG provider earlier saying that they're upgrading their servers for increased retention, so I'm blaming yesterday's glitch on some sort of pre-conversion efforts.

At least they told me in advance, and they aren't going to be losing articles post-transition :)