
Monthly Mentoring

Group Based Mentoring Sessions

  • BLUF: A group of people - both mentors and mentees - all on the same call. Chat with Robert (CEO of PEN Consultants), others on the PEN Consultants team, or others outside the team.
  • A non-traditional mentoring relationship: instead of 1-to-1 or even 1-to-N, this is N-to-N.
  • Multiple people receive mentorship at the same time.
  • Multiple people are able to provide insight at the same time.
  • The same person is able to both ask for advice/help and give advice/help on the same call.
  • Agenda: None. We lead with prayer, intros, and possibly pick back up on a previous month’s discussion that we did not complete. Other than that, it’s an open floor.
  • Topics: Anything - from getting started in cybersecurity to advanced technical challenges, business questions, and more.

To Join Us

If you would like to attend this free session, please sign up here: https://submit.jotform.com/232905636077057.


We hope to see you at our next session!

Past Topics Include

  • How to get into pen testing
  • How do you convince a client they need security in general (as in no budget for it) or penetration testing (as in limited budget)?
  • Should I get a 2nd degree in computer science in addition to cybersecurity?
  • Ethical considerations and boundaries for firing someone over an issue that was never communicated or given a chance for remediation
  • How to differentiate between a shopper and a buyer
  • What is the primary key to security?
  • How to communicate with someone who is largely unresponsive
  • Dealing with willful ignorance
  • Defining good performance objectives
  • What are the boundaries in regard to artificial intelligence as it relates to both cybersecurity and ministry?
  • What are the risks of using Copilot in Windows, how can those risks be reduced, or should it just be disabled?
  • What protections can be put in place to ensure AI queries cannot return sensitive data that should not be shared?
  • Is it possible to trick AI into providing inaccurate information by compromising a source from which it pulls data?