I have reproduced this on Karmic by manually assembling, stopping, and reassembling an array built on two LVM volumes. When mdadm --incremental is run on the first, degraded leg of the mirror, it activates the array, since that leg's metadata now shows one active disk out of one, with the second disk flagged as faulty,removed. You would expect the second disk's metadata to show the first as faulty,removed as well, but it shows it only as removed. When mdadm --incremental is then run on the second disk, it happily starts using it without a resync. I believe this should fail: mdadm should refuse to use the second disk until it is manually re-added to the array, triggering a full resync. I have mailed the linux-raid mailing list about this.
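
Roughly, the reproduction looks like the sketch below. The device names (/dev/md0, /dev/vg0/leg1, /dev/vg0/leg2) are just examples; substitute your own LVM volumes:

    # create a two-leg RAID1 from two LVM volumes, then stop it
    mdadm --create /dev/md0 --level=1 --raid-devices=2 \
        /dev/vg0/leg1 /dev/vg0/leg2
    mdadm --stop /dev/md0

    # assemble only the first leg (degraded), so its metadata
    # records the second leg as faulty,removed; then stop again
    mdadm --assemble --run /dev/md0 /dev/vg0/leg1
    mdadm --stop /dev/md0

    # incremental assembly: the first leg starts the array degraded...
    mdadm --incremental /dev/vg0/leg1
    # ...and the stale second leg is accepted without a resync (the bug)
    mdadm --incremental /dev/vg0/leg2

    # inspect the result
    mdadm --detail /dev/md0
    cat /proc/mdstat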