I'm working on a section of code that has many possible failure points, any of which causes an early return from the function. The libraries I'm interacting with require that C-style arrays be passed to their functions. So, instead of calling delete on the arrays at every exit point, I'm doing this:
void SomeFunction(int arrayLength)
{
    shared_ptr<char> raiiArray(new char[arrayLength]);
    char* pArray = raiiArray.get();
    if(SomeFunctionThatRequiresCArray(pArray) == FAILED) { return; }
    //etc.
}
I wanted to use unique_ptr, but my current compiler doesn't support it, and the reference-counting overhead doesn't really matter in this case.
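For reference, what I had in mind would look something like this on a compiler with C++11 support (a sketch, reusing the hypothetical function and FAILED constant from above; note that the array form unique_ptr<char[]> is needed so that delete [] is called):

#include <memory>

void SomeFunction(int arrayLength)
{
    // unique_ptr<char[]> calls delete [] automatically and carries no
    // reference-counting overhead.
    std::unique_ptr<char[]> raiiArray(new char[arrayLength]);
    char* pArray = raiiArray.get();
    if(SomeFunctionThatRequiresCArray(pArray) == FAILED) { return; }
    //etc.
}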
I'm just wondering if anyone has any thoughts on this practice when interfacing with legacy code.
UPDATE I completely forgot about the shared_ptr calling delete instead of delete []. I just saw no memory leaks and decided to go with it. I didn't even think to use a vector. Since I've been delving into new (for me) C++ lately, I'm thinking I've got a case of "if the only tool you have is a hammer, everything looks like a nail" syndrome. Thanks for the feedback.
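For anyone making the same mistake: the vector version I overlooked is about as simple as it gets (a sketch; &buffer[0] is the pre-C++11 way to get at the raw array, and buffer.data() is available from C++11 on):

#include <vector>

void SomeFunction(int arrayLength)
{
    std::vector<char> buffer(arrayLength);  // freed automatically on every return path
    char* pArray = &buffer[0];              // assumes arrayLength > 0; buffer.data() in C++11
    if(SomeFunctionThatRequiresCArray(pArray) == FAILED) { return; }
    //etc.
}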
UPDATE2 I figured I'd change the question and provide an answer to make it a little more valuable to someone making the same mistake I did. Although there are alternatives like scoped_array, shared_array and vector, you can use a shared_ptr to manage the scope of an array (though after seeing the alternatives, I have no idea why I would want to):
template <typename T>
class ArrayDeleter
{
public:
    void operator () (T* d) const
    {
        delete [] d; // array form, matching the new[] used to allocate
    }
};
void SomeFunction(int arrayLength)
{
    shared_ptr<char> raiiArray(new char[arrayLength], ArrayDeleter<char>());
    char* pArray = raiiArray.get();
    if(SomeFunctionThatRequiresCArray(pArray) == FAILED) { return; }
    //etc.
}
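For completeness, the alternatives mentioned above make the custom deleter unnecessary. A sketch (the boost:: types assume the Boost smart-pointer headers are available; std::default_delete assumes C++11):

#include <boost/scoped_array.hpp>
#include <boost/shared_array.hpp>

void Alternatives(int arrayLength)
{
    // Both of these call delete [] for you:
    boost::scoped_array<char> scoped(new char[arrayLength]); // non-copyable, scope-bound
    boost::shared_array<char> shared(new char[arrayLength]); // reference-counted

    // In C++11 you can also hand shared_ptr the correct deleter directly:
    // std::shared_ptr<char> p(new char[arrayLength], std::default_delete<char[]>());
}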