Red Hat Magazine articles aren't official documentation. Additionally, RHM is no longer published (and hasn't been for years).
The difference between what the article describes and what we support in RHEL comes down to quality assurance and testing: we can only support what we can reasonably test, and what we can commit to reproducing and resolving in the course of a support case. Linux-cluster and GFS/GFS2 will scale well past 16 nodes, but Red Hat doesn't test or do engineering and development work beyond 16.
The other side of the equation is that linux-cluster + GFS2 on RHEL, as marketed by Red Hat, is a high availability product, not a distributed computing or "big data" product. It's hard to make a case for HA at large scale. For HA purposes 16 nodes is on the generous side; I rarely see clusters of more than 4 nodes in my work with cluster customers.

Cluster and GFS2 could be spun into the backbone of a distributed computing or big data deployment, but that's not how Red Hat tests, develops, and thus supports the combination of those products. If you are doing a research, academic, community, or personal project and don't require enterprise support, you could likely do some really interesting things with GFS2/cluster at large scale. For supported deployments with a commitment from Red Hat to test, QA, develop, and resolve issues, the limit is 16.
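For context, a typical supported HA cluster of the size described above is defined in /etc/cluster/cluster.conf. The sketch below is a minimal, hypothetical example of a small two-node configuration (hostnames, the fence device, and its parameters are placeholders, not a tested or recommended setup):

    <?xml version="1.0"?>
    <cluster name="example" config_version="1">
      <!-- two_node mode: quorum with only one vote expected -->
      <cman two_node="1" expected_votes="1"/>
      <clusternodes>
        <clusternode name="node1.example.com" nodeid="1">
          <fence>
            <method name="primary">
              <device name="fence1"/>
            </method>
          </fence>
        </clusternode>
        <clusternode name="node2.example.com" nodeid="2">
          <fence>
            <method name="primary">
              <device name="fence2"/>
            </method>
          </fence>
        </clusternode>
      </clusternodes>
      <fencedevices>
        <!-- placeholder fence devices; a real deployment must use
             a supported fencing agent appropriate to the hardware -->
        <fencedevice name="fence1" agent="fence_ipmilan" ipaddr="10.0.0.1"/>
        <fencedevice name="fence2" agent="fence_ipmilan" ipaddr="10.0.0.2"/>
      </fencedevices>
      <rm/>
    </cluster>

Each additional node gets its own clusternode entry (and the two_node/expected_votes settings go away), which is where the tested-and-supported ceiling of 16 entries applies.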
Hope this information helps you.
Software Maintenance Engineer
Support Engineering Group
Red Hat, Inc.
On Jan 5, 2012, at 12:07 PM, Dax Kelson wrote:
Looking at an older Red Hat Magazine article by Matthew O'Keefe, such as: