[Cluster-devel] cluster/group/gfs_controld plock.c recover.c
teigland at sourceware.org
Wed Aug 2 20:50:40 UTC 2006
CVSROOT: /cvs/cluster
Module name: cluster
Changes by: teigland at sourceware.org 2006-08-02 20:50:40
Modified files:
group/gfs_controld: plock.c recover.c
Log message:
- complain about and ignore checkpoint sections with a bad size
- write the checkpoint for new nodes if the low node in charge of that failed
Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/cluster/group/gfs_controld/plock.c.diff?cvsroot=cluster&r1=1.5&r2=1.6
http://sourceware.org/cgi-bin/cvsweb.cgi/cluster/group/gfs_controld/recover.c.diff?cvsroot=cluster&r1=1.6&r2=1.7
--- cluster/group/gfs_controld/plock.c 2006/08/02 19:23:41 1.5
+++ cluster/group/gfs_controld/plock.c 2006/08/02 20:50:40 1.6
@@ -1235,6 +1235,12 @@
iov.readSize);
section_len = iov.readSize;
+		if (!section_len || section_len % sizeof(struct pack_plock)) {
+			log_error("retrieve_plocks: bad section len %d %s",
+				  section_len, mg->name);
+			continue;
+		}
+
unpack_section_buf(mg, desc.sectionId.id, desc.sectionId.idLen);
}
--- cluster/group/gfs_controld/recover.c 2006/08/02 18:27:57 1.6
+++ cluster/group/gfs_controld/recover.c 2006/08/02 20:50:40 1.7
@@ -1887,15 +1887,23 @@
/* New mounters may be waiting for a journals message that a failed node (as
low nodeid) would have sent. If the low nodeid failed and we're the new low
nodeid, then send a journals message to any nodes for whom we've not seen a
- journals message. */
+ journals message. We also need to checkpoint the plock state for the new
+ nodes to read after they get their journals message. */
void resend_journals(struct mountgroup *mg)
{
struct mg_member *memb;
+	int stored_plocks = 0;
list_for_each_entry(memb, &mg->members, list) {
if (!memb->needs_journals)
continue;
+
+		if (!stored_plocks) {
+			store_plocks(mg);
+			stored_plocks = 1;
+		}
+
log_group(mg, "resend_journals to %d", memb->nodeid);
send_journals(mg, memb->nodeid);
}