
Segmentation fault (core dumped)

Open YLONl opened this issue 6 years ago • 7 comments

Has anyone else hit this error? I googled it and found some explanations involving the core dump and the GPU/CUDA version, but I don't know how to solve it.

YLONl avatar Jan 15 '20 02:01 YLONl

I found the line that triggers it: the crash happens at "import networks".

YLONl avatar Jan 15 '20 02:01 YLONl

What does the Traceback show?

Serge-weihao avatar Jan 15 '20 03:01 Serge-weihao

No Traceback is shown, just that message.

YLONl avatar Jan 15 '20 07:01 YLONl

I would like to know more about the energy_H and energy_W variables: what they compute and how they help achieve the Criss-Cross Attention. Specifically, in these lines:

energy_H = (torch.bmm(proj_query_H, proj_key_H) + self.INF(m_batchsize, height, width)).view(m_batchsize, width, height, height).permute(0, 2, 1, 3)
energy_W = torch.bmm(proj_query_W, proj_key_W).view(m_batchsize, height, width, width)
concate = self.softmax(torch.cat([energy_H, energy_W], 3))

kumartr avatar May 26 '20 15:05 kumartr

They aggregate the values from the same column (energy_H) and the same row (energy_W) as the query. self.INF masks the overlapped position (the query pixel itself appears in both its column and its row) so that one of the two copies becomes zero after the softmax.

Serge-weihao avatar May 27 '20 05:05 Serge-weihao
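To illustrate the masking idea above, here is a minimal, self-contained sketch (not the repo's actual code; shapes and names such as `inf_mask` are illustrative assumptions) showing how adding a -inf diagonal to energy_H zeroes out the query pixel's own slot in the column branch after the softmax:

```python
import torch

def inf_mask(B, H, W):
    # (B*W, H, H) tensor with -inf on the diagonal: each query pixel's
    # own position along its column is masked, so after the softmax its
    # column-attention weight at that position is exactly 0.
    return -torch.diag(torch.full((H,), float("inf"))).unsqueeze(0).repeat(B * W, 1, 1)

B, C, H, W = 1, 8, 4, 5
# queries/keys gathered along each column, as in the (B*W, H, C) layout
q = torch.randn(B * W, H, C)
k = torch.randn(B * W, C, H)

energy_H = torch.bmm(q, k) + inf_mask(B, H, W)   # (B*W, H, H)
attn = torch.softmax(energy_H, dim=-1)

# the diagonal (the overlapped position) is zero after softmax
print(torch.diagonal(attn, dim1=1, dim2=2))
```

Each row of `attn` still sums to 1; the mass that would have gone to the overlapped position is redistributed over the other column entries.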

Thanks a lot, Serge, for your kind reply; I will take another look at your code with this insight. One more question: where are the H+W-1 'channels' of the attention maps computed?

Would it be possible to connect sometime over a short Zoom call to clarify a few other points? My email address is [email protected] and my LinkedIn profile is https://www.linkedin.com/in/kumartr/

kumartr avatar May 27 '20 09:05 kumartr

concate = self.softmax(torch.cat([energy_H, energy_W], 3)) computes the attention maps, and the one overlapped position was set to 0 by self.INF, so each pixel ends up with H+W-1 non-zero values plus 1 zero value.

Serge-weihao avatar May 28 '20 05:05 Serge-weihao
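The H+W-1 count can be verified with a small standalone sketch (shapes and the explicit masking below are illustrative assumptions, not the repo's exact code): concatenating the column and row energies gives H+W candidate positions per pixel, and masking the duplicated self-position leaves H+W-1 non-zero weights after the softmax.

```python
import torch

B, H, W = 2, 4, 5
energy_H = torch.randn(B, H, W, H)   # attention of pixel (i, j) over its column
energy_W = torch.randn(B, H, W, W)   # attention of pixel (i, j) over its row

# pixel (i, j) appears in both its column and its row; mask its own slot
# in the column branch so it is counted only once
i = torch.arange(H)
energy_H[:, i, :, i] = float("-inf")

# softmax over the H+W concatenated positions
concate = torch.softmax(torch.cat([energy_H, energy_W], dim=3), dim=3)  # (B, H, W, H+W)

# each pixel now has exactly H+W-1 non-zero attention weights
nonzero = (concate > 0).sum(dim=3)
print(nonzero)
```

Since softmax of finite logits is strictly positive and exp(-inf) is exactly 0, every entry of `nonzero` equals H + W - 1.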