I've only been reading these forums for the past month, due to problems I've experienced with a Smart Hub (HH6), and it is obvious that many people are being sent replacement hubs after reporting faults which have not been resolved.
Does anyone know what happens to 'faulty' hubs that are returned to BT? Are these put through a process to test them, then any which are NDF (no defect found) are repacked and sent to other customers, either as new or as replacements for yet more 'faulty' hubs?
The reason I ask is because there seem to be some problems which had been thought to be fixed (eg by new firmware) only for those problems to reappear and be reported by new users after some weeks or months. That seems to be the case with the random and frequent hub reboots, or the 'green light' issues that keep cropping up.
Having worked as a test engineer in the electronics and IT industry for many years, I know from experience that recycling NDF units can lead to faulty units recirculating because of inadequate test processes, often because the vital customer feedback about the defects is not 'attached' to the suspect units as they go back through the test process. Diagnosing some of the hard-to-find defects often requires advanced hands-on debugging, which for most companies these days is too expensive, in which case the next best option is to scrap anything that has been round the process several times. I suspect, though, that some cheapskates will just put them in a new box and ship them back to another customer.
Perhaps one of the Mods could tell us about the process used by BT.
I think that's a rather simplistic view. I very much doubt they 'fix' them at all, as most of the time I doubt they are aware of what the fault is, due to the poor testing regime. If they were tested and sent out faulty in the first place, why should the same tests produce a different result when they are sent back?
That's my thinking too. If the numbers being returned are as high as this forum suggests (accepting that many might be returned without any posting on the forum), then simply scrapping them would be unsustainable.
Some will genuinely be without fault and have been returned either because they didn't meet expectations, because of simple 'finger trouble', or because users didn't read the manual. Likewise there will be some that have easy-to-identify defects.
My OP was really to address the grey area in the middle, namely units with obscure, intermittent problems that are hard to reproduce or only occur after a specific sequence of events or environmental conditions, such as overheating, some of which are external to the unit and the user. It is likely that these originally passed test.

Having worked in the computer hardware test industry for years, I can say with confidence that there are two main types of test regime. The first is an in-circuit test (ICT) of the electronic circuit board, which weeds out manufacturing defects like wrong/missing/misplaced components, short circuits, open circuits and some functional component defects (bad chips). The second is a functional test, either of the circuit board by itself or of the finished assembled unit in its covers, and in some cases both of these. Functional testing can pick up defects that an in-circuit test is incapable of detecting, such as bad firmware or damaged connectors, or more obscure faults that only show up when the components are running at operational speeds.
The gold standard is to use ICT to weed out the simple-to-identify manufacturing defects, which form the bulk of defects, then to use a functional test if the product has a complex operational function and is hence more likely to have the obscure defects that slip through ICT. I would say modem/routers could fall into this category.
Obviously, detailed debugging of all returns would be expensive, so again good practice is to simply put them through the normal inspection and test process and carry out any repairs, but to mark any that are NDF (no defect found) so that they can be identified if they appear back in the test/repair process a second time. Any which appear a second or third time would normally be scrapped, but depending on the numbers it may occasionally be worth the effort and cost to fully debug these, with the aim of using the knowledge gained to improve the main manufacturing assembly and test processes and thereby reduce the return rate.
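To make the bookkeeping concrete, the mark-and-scrap policy described above can be sketched as a few lines of code. This is purely a hypothetical illustration of the general practice, not BT's actual process; the threshold and disposition names are assumptions.

```python
# Hypothetical sketch of an NDF (no defect found) tracking policy:
# returned units go through the normal test process; units that pass
# with no defect found are marked per serial number, and any unit that
# comes back NDF again is scrapped rather than reshipped to a customer.
from collections import defaultdict

SCRAP_THRESHOLD = 2  # scrap after this many NDF passes (assumed value)

class ReturnsTracker:
    def __init__(self):
        self.ndf_counts = defaultdict(int)  # serial number -> NDF passes

    def process_return(self, serial, defect_found):
        """Decide the disposition of a returned unit."""
        if defect_found:
            return "repair"       # identifiable fault: repair and retest
        self.ndf_counts[serial] += 1
        if self.ndf_counts[serial] >= SCRAP_THRESHOLD:
            return "scrap"        # repeat NDF: remove from circulation
        return "restock"          # first NDF pass: mark and return to stock

tracker = ReturnsTracker()
print(tracker.process_return("SN123", defect_found=False))  # restock
print(tracker.process_return("SN123", defect_found=False))  # scrap
```

The point of keeping the counter keyed by serial number is exactly the 'attachment' of history to the suspect unit that the earlier post says is so often missing.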
I would suggest that the OP and I have a far better idea of what is involved than you do, and it's you who doesn't know what he is talking about.
I don't want to get into a bun fight with TelephoneBob, but as a BSc and PhD graduate with over 35 years' experience working for IBM, about 12 years of which were spent designing and building functional test equipment as well as programming commercial in-circuit test equipment, I think I know what I'm talking about. Indeed, the scenarios I described were some of the areas where I developed business cases to determine the right mix of functional vs in-circuit testing, and analysis of so-called NDFs was a significant part of process improvement programmes.
Do you have an equivalent background in the electronics and computer industry?
My experience of replacement HH5s includes two 'new' units which, when installed, showed entries for wireless devices in the network diag'
I do not use any wireless
Not even 'cleaned out'