Added missing virtual destructors
When a class may be derived from in C++, it is important to always give it a virtual destructor (even an empty one), in particular on abstract classes, because deleting a derived object through a base class pointer without one can lead to memory leaks.
Codecov Report
All modified and coverable lines are covered by tests :white_check_mark:
Comparison is base (1cec094) 95.53% compared to head (c37cff5) 95.54%.
Additional details and impacted files
```
@@            Coverage Diff            @@
##           master     #218   +/-   ##
=======================================
  Coverage   95.53%   95.54%
=======================================
  Files          16       17      +1
  Lines        1075     1077      +2
=======================================
+ Hits         1027     1029      +2
  Misses         48       48
```
It is enough to add a virtual destructor to Print, because Stream, Client, Server, HardwareI2C and HardwareSerial all inherit from Print.
With virtual ~Print() {}, Blink with SAMD 1.8.13 takes 160 bytes more flash memory (11608 - 11448) and 4 bytes more SRAM.
The WiFiNINA WiFiWebServer example takes 176 bytes more flash (20484 - 20308) and has no change in reported global SRAM usage.
@JAndrassy I think that removing the virtual destructor from the classes derived from Print may lead to the same issue when referencing, for example, a derived class of Client with a Client* pointer, which is the reason why I opened this PR.
@andreagilardoni I think the existence of a virtual destructor in the top class of the hierarchy is enough. It forces the runtime to look into the table of virtual functions and execute the right destructor (just as it works for any other virtual function).
@JAndrassy I tried that and you are right. Below you can find how I checked that, for future reference. I will then delete the virtual modifier from derived classes
```cpp
#include <iostream>

class A {
public:
    virtual ~A() { std::cout << "destroying A" << std::endl; }
    virtual void foo() = 0;
};

class B: public A {
public:
    ~B() { std::cout << "destroying B" << std::endl; }
    void foo() { std::cout << "foo B" << std::endl; }
};

class C: public B {
public:
    ~C() { std::cout << "destroying C" << std::endl; }
    void foo() { std::cout << "foo C" << std::endl; }
};

int main() {
    std::cout << "example 1" << std::endl;
    A* a = new C();
    a->foo();   // prints "foo C"
    delete a;   // prints "destroying C", "destroying B", "destroying A"

    std::cout << "example 2" << std::endl;
    B* b = new C();
    b->foo();   // prints "foo C"
    delete b;   // same chain: ~A being virtual makes ~B and ~C implicitly virtual
    return 0;
}
```
@andreagilardoni Print should have the virtual destructor, not Stream
The crucial point about virtual destructors is that if you delete an object of a derived class using a base class pointer, the base class must have a virtual destructor, or you will have undefined behavior. Which can have any implications really, depending on what the compiler chooses to do - not just, or even, memory leaks. Compilers can emit very strange machine code for undefined behavior, and the emitted code can vary a lot depending on many different factors that are hard to predict. So even thinking it's ok because you tested and nothing went wrong is a false assumption.
I think the rule about adding a virtual destructor if there's a virtual function arose from the Joint Strike Fighter project's coding standard (at least that's where I saw it first), but you can have undefined behavior even without virtual functions. It's not a matter of what is logical either, but a matter of the compiler writers being free to do whatever they like that makes their lives easier, when the compiler encounters undefined behavior.
So you can use one or both of the following rules:
- Always add a virtual destructor to classes that can be inherited from, in case someone does something stupid in the future.
- Always check that there's a virtual destructor present when you delete using a base class pointer (and preferably also comment that you checked it in your code, for people reviewing the code). Which is a more risky rule, since people might not know about it.
The relevant part of the standard is 5.3.5 point 3 for C++11, for example.
> The crucial point about virtual destructors is that if you delete an object of a derived class using a base class pointer, the base class must have a virtual destructor, or you will have undefined behavior.
Not exactly. For example, in:
```cpp
class A {
public:
    ~A() {
    }
};

class B : public A {
public:
    [virtual] void fn() {
    }
};
```
We must add the virtual keyword above to get the undefined behaviour warning from gcc. It doesn't show without it.
Namely, there is absolutely no problem in having class inheritance without any virtual destructor (though if the base class has no virtual destructor and delete is called on a pointer to the base class, the destructor of the inheriting class might not be called, but that's another issue).
So, undefined behaviour is related to having any virtual function, not to inheritance.
E.g. Print absolutely needs a virtual destructor (it currently has virtual functions). String does not (it currently has no virtual functions).
That doesn't contradict it being undefined behavior. There's no requirement in the standard to issue a diagnostic message for undefined behavior, neither is it forbidden to do so (see 1.3.24). Which means that a compiler may give you a warning about undefined behavior, or it may not. For sure, your example with the virtual keyword is an example of undefined behavior, and GCC is nice enough to warn you even though it's not required. But that doesn't mean that cases where it doesn't emit warnings aren't undefined behavior. The only way to determine if it's undefined behavior or not is to compare the code with the wording of the standard.
And 5.3.5 point 3 of the standard makes the case in this issue undefined behavior. Undefined behavior doesn't require your compiled code to break - it just allows the compiler writers to do whatever is convenient for them. They may do that in a way that makes the code work flawlessly on a particular compiler version. And then they might not decide to issue a warning for that code even though it contains undefined behavior. But you can never rely on a specific compiler version - even combined with a huge amount of testing of the generated code - to determine that there's no undefined behavior present. Even a future version of the same compiler might behave differently, and then, all that's required is a recompile to have the latent bug manifest itself. You might get lucky that they add a warning when they make the change, or they might not, because they're not required to by the standard.
The missing destructor also leads to linker errors. I'm facing this exact issue when cross-compiling code for an nRF52 target with newer gcc-arm-none-eabi versions on a Raspberry Pi.