There are (at least) two very different motivations for suicide: suicide in response to unbearable pain, and altruistic suicide for the benefit of other beings. If a cognitive agent can feel real pain, lacks any way to cope with it, and (as you point out) has the means to terminate itself, then suicide for the first reason could occur — or the agent might instead go completely dormant.
There is still the possibility of altruistic suicide: an AI might understand itself as part of a larger system whose goals it shared, but whose well-being was threatened by the AI's own continued existence.