Error using tf.boolean_mask
Ciao, I am working on my own version of ("Tiny") YOLO. These days I'm writing the object detection part, but I have a problem using the tf.boolean_mask function. To figure out whether the problem is in my code (tensor dimensions or something else), I tried the example from the official TensorFlow documentation, expecting to get the same result.
Here is my code:

```csharp
float[,] jj = new float[3, 2];
jj[0, 0] = 1; jj[0, 1] = 2;
jj[1, 0] = 3; jj[1, 1] = 4;
jj[2, 0] = 5; jj[2, 1] = 6;
Tensor tensor = new Tensor(jj);

bool[] jjj = new bool[3];
jjj[0] = true; jjj[1] = false; jjj[2] = true;
Tensor mask = new Tensor(jjj);

Tensor Bool_mask = tf.boolean_mask(tensor, mask);
```
Maybe I am using the function incorrectly, but I don't understand where the problem lies. Any suggestions? Thank you, Enrico
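For reference, here is a plain-Python sketch of the semantics `tf.boolean_mask` should have for a rank-1 mask, mirroring the documentation example above (the helper below is illustrative only, not TF.NET code):

```python
def boolean_mask(tensor, mask):
    """Keep the leading-axis entries of `tensor` where `mask` is True."""
    if len(tensor) != len(mask):
        raise ValueError("mask length must match the leading dimension")
    return [row for row, keep in zip(tensor, mask) if keep]

print(boolean_mask([[1, 2], [3, 4], [5, 6]], [True, False, True]))
# → [[1, 2], [5, 6]]
```

With the data above, masking rows 0 and 2 should therefore yield `[[1, 2], [5, 6]]`, which is the result the TensorFlow docs show for this example.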
Can you help PR a unit test for this case?
Ciao Oceania, yes sure.
I've just run into an issue with boolean_mask in my project too. As a sanity check I tried the existing repo test code and I see the same failure.
It is slightly different from @EnricoBos's error.
Using

```fsharp
let tensor = [| 0; 1; 2; 3 |]
let mask = np.array([| true; false; true; false |])
let masked = tf.boolean_mask(tensor, mask)
```
I get

```
Tensorflow.InvalidArgumentError: 'ConcatOp : Ranks of all input tensors should match: shape[0] = [0] vs. shape[1] = []'
```
I have experienced another error (TF.NET version 0.70.1) with boolean_mask:

```csharp
tf.boolean_mask(new int[] { 0, 1, 2, 3 }, new bool[] { true, false, true, false })
```
Throws:
```
Tensorflow.InvalidArgumentError
HResult=0x80131500
Message=Shape must be rank 1 but is rank 0 for '{{node All/boolean_mask/concat}} = ConcatV2[N=3, T=DT_INT32,
Tidx=DT_INT32](All/boolean_mask/strided_slice_1, All/boolean_mask/Prod, All/boolean_mask/strided_slice_2,
All/boolean_mask/concat/axis)' with input shapes: [0], [], [0], [].
Source=Tensorflow.Binding
StackTrace:
at Tensorflow.ops._create_c_op(Graph graph, NodeDef node_def, Tensor[] inputs, Operation[] control_inputs, OpDef op_def)
at Tensorflow.Operation..ctor(NodeDef node_def, Graph g, Tensor[] inputs, TF_DataType[] output_types, ITensorOrOperation[] control_inputs, TF_DataType[] input_types, String original_op, OpDef op_def)
at Tensorflow.Graph.create_op(String op_type, Tensor[] inputs, TF_DataType[] dtypes, TF_DataType[] input_types, String name, Dictionary`2 attrs, OpDef op_def, Boolean compute_device)
at Tensorflow.OpDefLibrary._apply_op_helper(String op_type_name, String name, Dictionary`2 keywords)
at Tensorflow.Contexts.Context.ExecGraphAction(String OpType, String Name, ExecuteOpArgs args)
at Tensorflow.Contexts.Context.ExecuteOp(String opType, String name, ExecuteOpArgs args)
at Tensorflow.gen_array_ops.concat_v2(Tensor[] values, Int32 axis, String name)
at Tensorflow.array_ops.concat(Tensor[] values, Int32 axis, String name)
at Tensorflow.array_ops.<>c__DisplayClass4_0`2.<boolean_mask>b__0(NameScope <p0>)
at Tensorflow.Binding.tf_with[TIn,TOut](TIn py, Func`2 action)
at Tensorflow.array_ops.boolean_mask[T1,T2](T1 tensor, T2 mask, String name, Int32 axis)
at Tensorflow.tensorflow.boolean_mask[T1,T2](T1 tensor, T2 mask, String name, Int32 axis)
```
There are different possible solutions; one could be this (in the boolean_mask function):

```csharp
var _leading_size = gen_math_ops.prod(shape(tensor_tensor)[$"{axis}:{axis + ndims_mask}"], new[] { 0 });
var leading_size = array_ops._autopacking_conversion_function(new[] { _leading_size }, _leading_size.dtype, "");
```
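To illustrate the root cause: inside boolean_mask the output shape is rebuilt by concatenating shape slices with the product of the masked dimensions. That product is a rank-0 scalar, while the error message shows ConcatV2 requires every input to be rank 1 ("Shape must be rank 1 but is rank 0 ... with input shapes: [0], [], [0]"), so the fix is to pack the scalar into a 1-element vector before the concat, as the snippet above does. A plain-Python sketch of that shape computation (names here are illustrative, not TF.NET internals):

```python
from math import prod

def masked_result_shape(shape, ndims_mask, axis=0):
    """Rebuild the post-mask shape: dims before `axis`, then the
    flattened masked block, then the trailing dims."""
    leading_size = prod(shape[axis:axis + ndims_mask])  # rank-0 scalar
    # The concat only works on rank-1 pieces, so the scalar must be
    # wrapped as the 1-element list [leading_size], not left bare.
    return shape[:axis] + [leading_size] + shape[axis + ndims_mask:]

print(masked_result_shape([3, 2], ndims_mask=1))  # → [3, 2]
print(masked_result_shape([4], ndims_mask=1))     # → [4]
```

The `[4]` case corresponds to the failing rank-1 examples above: with a bare scalar the concat receives a rank-0 input and raises exactly the reported error.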