Security through obscurity is no security at all. That said, obscurity does add an extra layer of protection and should not be dismissed altogether.
When creating an application, a vendor should follow all best practices to protect their application/code from “reverse-engineering, tampering, invasive monitoring, and intrusion” (Source: OWASP.org). OWASP calls this “Application Hardening and Shielding”. Read more about that here: https://www.owasp.org/index.php/Application_Hardening_and_Shielding. Failing to do so can lead not only to the theft of intellectual property (i.e., the source code), but also to grave harm to the customers and users of the application.
When creating an application, especially a security application, a vendor should consider how its core components are assembled (from a code perspective):
No application/binary can be fully protected against disassembly (machine code to assembly). For a security product, though, one should seriously consider a language that does not let an entry-level attacker recover the original high-level source code from the binary using script-kiddie tools such as ILSpy or jd-gui. Also weigh how many attackers can read assembly as efficiently as Java or C#. It is probably less than 1%, so raise the bar!
Although I am coming at this from a security perspective, do not forget the typical performance differences between machine-compiled code, JIT-compiled code, and scripts. Machine-compiled binaries are typically more efficient for thick applications. So even if your application will be open source, this is still worth considering.
After following all of the OWASP recommendations (e.g., detecting debug attempts), digitally sign your application, and ensure that every process interacting with your application verifies that signature. This last step is by far the most vital, even if you are not concerned about protecting your source code (e.g., an open-source project).
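As a minimal sketch of that last step, here is what signing and verifying an artifact looks like with the OpenSSL command line. The file names (vendor.key, vendor.pub, app.bin, app.sig) are placeholders for illustration; a real product would use a platform code-signing mechanism (e.g., Authenticode or codesign) backed by a CA-issued certificate rather than a bare key pair.

```shell
# Generate a vendor key pair (illustrative; in practice the private key
# lives in an HSM or secured build system, never alongside the app).
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out vendor.key
openssl pkey -in vendor.key -pubout -out vendor.pub

# Stand-in for the application binary being shipped.
printf 'application binary contents' > app.bin

# Vendor side: sign a SHA-256 digest of the binary at release time.
openssl dgst -sha256 -sign vendor.key -out app.sig app.bin

# Consumer side: any process interacting with the app verifies the
# signature before trusting or executing it. Prints "Verified OK".
openssl dgst -sha256 -verify vendor.pub -signature app.sig app.bin

# Tampering demo: one changed byte and verification fails,
# which is exactly the property the article is arguing for.
printf 'x' >> app.bin
openssl dgst -sha256 -verify vendor.pub -signature app.sig app.bin || echo "tampered binary rejected"
```

The point of the tampering step is that verification binds the signature to the exact bytes shipped: any modification, however small, is detected.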