Hi, this is Ed Maguire, Insights Partner at Momenta Partners, with another episode of our Edge Podcast, and today our guest is SK Reddy, who is the Chief Product Officer in AI at Hexagon; he’s an entrepreneur and a technologist, and we’re going to dive into artificial intelligence, machine learning, and a whole bunch of other topics in this conversation, so I’m really looking forward to it. SK, thank you so much for joining us today.
Thank you, Ed. Thank you for giving me the opportunity, I really appreciate it, and hopefully I have some information that your listeners would be interested in listening to.
Well, there’s no doubt about that. First, I would love to get a bit of context about what has shaped your view of the artificial intelligence market, and what in your experience has brought you to where you are today.
I’ve been a techie all through my life, and after I did my double masters, one in sociology and one in computer science, I’ve been working in the tech industry and developing a lot of solutions. In the last 10 years I did two startups and I also worked for Apple. Whilst doing these things I have developed an interest in finding out how to solve the problems that are affecting humankind, especially using technology: how do I make it easier, safer, simpler, and more efficient for human beings to live on this earth and preserve the ecosystem?
My two start-ups were in that direction, and some of the use cases I developed in Apple were in that direction, so I think it looks like a natural course of action for me to continue to work in that, and that’s where I am right now. So, the past has influenced me to start doing something nice for this earth.
How do you think about artificial intelligence, how would you define it, and are there some unique aspects of your experience that really color your view of the market?
Artificial intelligence; we have been hearing these words a lot in the last few years, and there are different schools of thought: one that doesn’t believe in it, one that is suspicious, and another that I think is completely in favor. In my opinion, artificial intelligence is that collection of knowledge, especially mathematical and statistical techniques, applied to the lots and lots of data available in organizations and with human beings today, to extract additional wisdom. That’s what I would say artificial intelligence is.
The sci-fi movies of the last few years have not helped AI, because there seems to be a misconception that AI is going to take over the world; that’s not the case. In my opinion, and in very simple language, AI is that set of techniques that you can use to solve complex problems which were not solvable using some of the erstwhile techniques like business intelligence and analytics. That’s where I think AI comes to the rescue.
Where do you apply AI? It would be interesting if you could share from your perspective a little bit of the work that you do on a daily basis. How do you look at the research that’s going on in this space, and evaluate what’s out there?
You asked me where you apply AI, and how you evaluate AI. AI could be applied in each and every aspect of human life, every type of industry, and every type of company. I would say any task which used to be done by any type of technology could be done better using AI, and any of those tasks which were initially found to be almost impossible can now be done by AI efficiently and effectively.
So, my kind of focus is on Industry 4.0 types of use cases: any manufacturing or industrial context, construction, power, ship construction, factory construction, mining, agriculture, geospatial systems, any type of Industry 4.0 solution where we are applying AI. But it could be applied to other, more popular areas like financial services, insurance services, healthcare, and so many other areas too. Obviously, things like automatic face detection, autonomous cars, and robotics have made AI more popular, but I would say there’s nothing to worry about. There’s a lot more research to happen, but AI is that summary of techniques that have existed in statistics, which could be used to extract more wisdom from data.
How do you go about ensuring when you start to apply technologies, that you’re really using the appropriate techniques? There’s a big grab-bag of technologies and approaches, and it’s not necessarily clear what the best application is. How do you think about employing the right technology, and fitting it to get the best results from a business problem?
Even though AI is more of a science, identifying the right technique from among a bunch of available techniques is still an art. This may sound a little funny, but many AI scientists have access to so many different tools, techniques, methods, methodologies, and frameworks, yet identifying the right technique, the right framework, the right parameters, and the right type of architecture still happens to be more of an art. If you read some of the technical papers being published, people of course only talk about the success stories, not the failure stories, and I wish they did; but in many of these success stories, when you look carefully, the techniques chosen are from hundreds or even thousands of available techniques.
For example, in machine learning, especially in the area of study called predictive modelling, there are apparently around 5,000 known statistical techniques available that could be applied to extract wisdom; for example, predicting when a machine is going to fail, or computing the remaining useful life, or so many similar situations. But no single human being can effectively remember or understand how to implement each of those 5,000 techniques. Of course, the frameworks being developed by some of the innovative companies are really helping developers build a solution faster using any of the techniques, but the real intelligence, real critical thinking, and real art is in identifying the right technique.
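As a concrete illustration of comparing candidate techniques, here is a minimal sketch using scikit-learn and a synthetic stand-in dataset (neither is mentioned in the conversation): rather than committing to one technique blindly, a practitioner can score a few candidates with cross-validation.

```python
# A minimal sketch of "picking the right technique": compare a few candidate
# predictive models with cross-validation instead of committing to one blindly.
# The dataset is a hypothetical stand-in for labeled machine-failure data.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Stand-in for sensor readings labeled "failed within 30 days" (1) or not (0).
X, y = make_classification(n_samples=2000, n_features=20, random_state=42)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=42),
    "gradient_boosting": GradientBoostingClassifier(random_state=42),
}

for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: mean AUC = {scores.mean():.3f} (+/- {scores.std():.3f})")
```

Even this small loop only narrows the field; as SK says, choosing which candidates to try in the first place remains the art.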
So, thankfully, the open-source world, the mentality and attitude of publishing papers, and the positive approach among research organizations, universities, and companies to collaborate and work together are helping reduce the trial-and-error approach by developing some scientific methods. But within AI, finding the right technique, in my opinion, is still an art. It takes a lot of critical thinking amongst AI scientists, and a good amount of discussion amongst such scientists.
The technology as you mentioned has been around for a long time, but it’s only been in the last few years where we’ve really seen I guess a widespread adoption, or accelerating adoption. Why was that, why did AI go through the AI winter in the early stages, and can you point to any factors that have significantly changed that have been catalysts for adoption of AI technologies?
The techniques being used today did exist, I think, even 30, 40, even 50 years ago, but the tremendous amount of acceleration you see in AI adoption and AI solutions is because of a few factors:
- The cheaper hardware that’s used to compute large amounts of data.
- The cloud servers available; people don’t have to own machines anymore, they can just rent them. That’s another innovative idea that came out in the last 10 years.
- The easier availability of datasets, especially in predictive modelling, image processing, or natural language processing, all these areas need a huge amount of data, and many of the research institutions, universities, and companies are more willing to share this data, and that’s another factor which is helping.
- Last but not least, some of the initial successes shown by the tech companies implementing AI in certain social media contexts have really proven the value to the executives of companies, and also to heads of state, and helped them develop more confidence in AI and the implementation of AI solutions.
These are some of the factors. At the same time, of course, people talk about the AI winter; there was a bit of a lull period over the last 20 years, but I feel there is still a little bit of a lull going on, in the sense that there could be more excitement, more energy, more supply of resources, more use cases, more data, and more innovative techniques coming in to make AI a common day-to-day activity. Right now, I think AI is still sort of an alien subject for many, many people, because you don’t have enough talented people at this point in time, but I would imagine that in a couple of years things will accelerate. Then I would definitely say the AI winter has gone, the spring is coming in, and eventually the summer is going to come.
Are there some factors do you think which could help ease that shortage of skilled AI people? I think that’s certainly a constraint on adoption, but what could help to ease the shortage, or make the technology simpler to use?
Good question; it’s a little complicated, I guess, because of demand and supply: the demand is pulling at even the basic sources of supply. Let me explain what that means. There are instances where university students are quitting their education halfway through and joining companies, because they have learnt part of the AI knowledge and that seems to be quite sufficient for the company trying to hire them. Companies are actively encouraging them to quit and come to work, saying they can always go back some time later, and I don’t know when that’s going to happen.
Another situation is, I’m told many universities are running short on faculty, because AI experts would rather work in a company and solve exciting challenges; they have access to good datasets and good computing power, and of course the money is not bad either compared to the universities. That’s not helping.
Now, your question is, how do we solve it? I don’t know. I think over time the gap between demand and supply will reduce, but investment from various governments and countries would help. Also, many of the universities which are extremely rich with endowments could invest more in creating labs, encouraging faculty to stay, and encouraging students to complete their courses, which will also help.
I do have a slightly more fundamental suggestion: many high schoolers, the 11th and 12th graders, should also start learning programming and some critical-thinking techniques, which will make it easier for them when they enter university studies; they could quickly pick up AI, and hence the availability of resources would also get a boost. There’s no other shortcut. I don’t want to get into the topic of visas and immigration, but governments need to invest in creating those resources internally by making it easy for students to join these courses.
Yes, there’s no doubt. We highlighted in a recent webinar that there’s been an enormous amount of interest in AI at the national level; I think Germany’s trying to catch up with other countries because it feels like it’s falling behind, particularly in the application of AI to Industry 4.0, and of course you have many billions of dollars being invested on the military side. Clearly it would be great to have more engagement and more ease of use.
When you look at either events, developments, or people in the market that have had a big impact, particularly on your views of the market, are there any that stand out?
Of course, similar to social media and the regular media, AI too has its own share of celebrities, and I do adore some of the celebrities who have really created fantastic AI techniques and solutions. At the same time, some fantastic, innovative companies have invested resources, developed techniques, and developed datasets for ordinary people to start adopting AI in their respective areas of work. But instead of mentioning the names of companies or people, I would rather mention a couple of techniques that have definitely accelerated AI adoption in the world.
First, one of the top accelerators was the invention, or adoption, of CNNs, Convolutional Neural Networks, to process images; that happened in 2012. Since then, neural network architectures based on CNNs have tremendously improved accuracy rates. CNN-based image processing models can classify an image with accuracy as high as 99.95 to 99.98 percent, which is amazingly better than even human beings. That’s one technology update.
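For readers who want to see what a CNN looks like in code, here is a minimal sketch in PyTorch (one common framework; the conversation doesn’t name a specific one). The shapes assume 28x28 grayscale images, a typical toy setting, and the architecture is an illustrative assumption rather than any model discussed here.

```python
# A minimal CNN image classifier: convolution layers learn local visual
# filters, pooling shrinks the image, and a linear layer outputs class scores.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # learn 16 local filters
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 28x28 -> 14x14
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 14x14 -> 7x7
        )
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = SmallCNN()
dummy = torch.randn(8, 1, 28, 28)  # a batch of 8 fake grayscale images
print(model(dummy).shape)          # torch.Size([8, 10]) -> per-class scores
```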
The second is backpropagation. I remember, even as recently as four or five years ago, when you ran a neural network model there was this concept called backpropagation. Backpropagation is the way a neural network learns: every time it makes a mistake, or every time it predicts the right answer, it learns what made it choose the right answer and what made it choose the wrong answer. Until three or four years ago it was a cumbersome and complex process for developers to write the software that does the backpropagation; it was not an impossible task, but it was a difficult one. Sometimes, if you made an error, it was difficult to even catch that you’d made an error in the backpropagation.
But in the last two years, some of the frameworks that have been released as open source have addressed backpropagation in such a way that it is now automatic; you don’t even need to write a single line of code to make the neural network learn using backpropagation, which is such a tremendous boost in productivity. I remember writing an open-sourced, simple sentiment analysis solution using neural networks: when I did it three or four years ago I had to spend a lot of time and write many lines of code just for the backpropagation, but two years ago, when there was no need to write the backpropagation myself, the entire sentiment analysis model on a small dataset of around 10,000 samples totaled 23 lines of code. That is a tremendous boost in productivity. So, that was another technique which made a tremendous impact.
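To show what this automatic backpropagation looks like in practice, here is a minimal sketch using PyTorch as one example framework. The tiny bag-of-words model and random data are illustrative placeholders, not the sentiment model SK describes; the point is that a single loss.backward() call computes every gradient, with no hand-written backpropagation code.

```python
# Automatic differentiation in action: define a model and a loss, and the
# framework derives all the backpropagation gradients for you.
import torch
import torch.nn as nn

vocab_size = 1000
model = nn.Sequential(nn.Linear(vocab_size, 16), nn.ReLU(), nn.Linear(16, 1))
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Fake bag-of-words vectors and 0/1 sentiment labels, standing in for real data.
X = torch.rand(64, vocab_size)
y = torch.randint(0, 2, (64, 1)).float()

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()   # autodiff: gradients for every parameter, no manual code
    optimizer.step()
    print(f"epoch {epoch}: loss = {loss.item():.4f}")
```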
Topics like GANs, Generative Adversarial Networks, and reinforcement learning are other techniques that have tremendously improved the way models can learn tasks that are complex, intricate, and confusing for human beings, and extract wisdom out of them. These are some of the history-shaping AI techniques invented in the last three or four years, and in turn they are some of the reasons why AI adoption has accelerated, because people find it easier to adopt and easier to develop a neural network.
GANs I’ve heard applied to security in particular, could you explain a bit about what characterizes a GAN setup, or implementation as it were?
In certain neural network use cases, when you want to detect fraud, or when you want to identify images more accurately, there were not enough examples to show the model what a right image or a legitimate transaction is, versus a fraudulent transaction or a wrong image. The concept of GANs is nothing but two independent neural networks: one neural network creates an image and tells the other neural network, ‘Oh, this is the right image’, and the second neural network tries to find out whether it is the right image or not. The response of the second neural network is given to the first, which says, ‘Huh, so you figured out that I gave you the wrong image. So, I’m going to create a little more complex, a little more sophisticated image which is even more difficult for you to identify as a fake image, or a wrong image’.
This back and forth, the duel, goes on, and eventually the network becomes extremely intelligent at creating a fake and at detecting a fake. So, GANs are being used in many, many use cases: financial services, healthcare, image processing. I personally believe the real potential is yet to be tapped; even though there are fantastic GAN-based solutions available, I see a lot of innovation and creativity within GANs, with newer variations coming out under new names like BigGAN and CycleGAN. There are so many different GANs coming in, and I believe we have not tapped even one percent of the potential of GANs.
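The duel SK describes maps directly onto code. Here is a minimal sketch of a GAN training loop on one-dimensional toy data; the architectures and the target distribution are illustrative assumptions, and real GANs apply the same loop to images.

```python
# A toy GAN "duel": the generator G fakes samples, the discriminator D judges
# them, and each learns from the other's responses.
import torch
import torch.nn as nn

latent_dim = 8
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
loss_fn = nn.BCEWithLogitsLoss()
opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)

for step in range(1000):
    real = torch.randn(64, 1) * 0.5 + 2.0        # "real" data: N(2.0, 0.5)
    fake = G(torch.randn(64, latent_dim))

    # Discriminator turn: learn to label real as 1 and fake as 0.
    opt_D.zero_grad()
    d_loss = loss_fn(D(real), torch.ones(64, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_D.step()

    # Generator turn: learn to make the discriminator call fakes "real".
    opt_G.zero_grad()
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_G.step()

mean = G(torch.randn(1000, latent_dim)).mean().item()
print(f"fake sample mean: {mean:.2f} (target ~2.0)")
```

After enough rounds of this duel, the generator's samples drift toward the real distribution, which is exactly the "create a better fake, detect a better fake" dynamic described above.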
That’s impressive, I guess what strikes me is that the technology itself is evolving to the point where it can almost improve itself. I guess, deep learning was a key breakthrough in that regard where you need less and less programming involved, or you need to write less code, but as you look at the market are there any broader forces, or bigger secular forces that you see as accelerating the maturity of the technology, and also enabling adoption more easily?
There are many forces at work. One is the willingness of innovative companies to open-source their solutions, and the willingness of universities and innovative companies to open-source their datasets and publish their innovative AI solutions. More and more universities are offering courses in AI, hence creating more and more AI engineers. Many heads of companies and heads of governments are becoming convinced of the potential of AI, and many governments, like Germany as you mentioned, and of course so many others, are introducing policy documents which will encourage and support the adoption of AI by investing government money and making it easier for companies to invest in these areas.
It’s almost like a snowballing effect: the more companies adopt, the more success stories are heard in the news media, and the more universities are willing to graduate more students, the more companies are ready to hire and the more students are ready to learn AI and get into it.
Also, the frameworks I mentioned earlier are making it easy to develop solutions compared to, let’s say, two or three years ago. Some of the hardware companies are inventing better, faster, cheaper GPUs, FPGAs, and other custom-developed hardware, which is also facilitating the acceleration of AI. The combination of availability of data, availability of techniques, availability of frameworks, and good computing infrastructure is helping universities, companies, and countries alike to adopt AI.
Hopefully some of those supply of GPU chips will free-up now that the crypto-mining market has crashed!
I wanted to turn to applications. One of the areas we’re focused on at Momenta is industrial IoT, and I would love to get your perspective on some of the ways that applied AI and machine learning can really drive meaningful impact across industrial IoT use cases.
Similar to digital transformation, or any dramatic transformation in, let’s say, the last 100 to 200 years of the industrial world, AI has been adopted more actively by certain industries like high tech, financial services, and healthcare, and there are certain industries which are very slow in adopting, and unfortunately that happens to be Industry 4.0. Many companies in manufacturing, mining, agriculture, factories, any of these Industry 4.0 settings, are a little slow in adopting AI. My current focus is making it easier for organizations to identify those business challenges that could be converted into AI challenges, and identifying easier, simpler solutions, the so-called low-hanging fruit, which can be implemented immediately. It also means identifying data within the organization, sometimes getting data from a third-party organization, or sometimes synthetically creating a dataset that will go into creating the solution being advocated.
In Industry 4.0, one of the most common and most readily solvable use cases is predictive modelling of machines; for example, when is my machine going to fail? Whether it’s a massively expensive truck, a machine in a factory, or even a very simple mechanical device that you are using on a construction site, companies want to find out when a machine or a component is going to fail, so that they can do some preventive care, or other mitigation activities, so that the work doesn’t get affected. That’s one of the most common use cases, and there are good solutions and good datasets available; that’s where I think Industry 4.0 is going right now.
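As a concrete illustration of this use case, here is a minimal sketch of a remaining-useful-life (RUL) regression on synthetic sensor data. The feature names and the RUL formula are hypothetical stand-ins for real telemetry, which the conversation doesn’t provide.

```python
# Predictive maintenance sketch: estimate remaining useful life (RUL)
# from sensor features, using synthetic data as a stand-in for telemetry.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical features: temperature, vibration, pressure, hours since service.
X = rng.normal(size=(5000, 4))
# Toy ground truth: RUL shrinks as vibration (col 1) and hours (col 3) grow.
rul = 500 - 80 * X[:, 1] - 60 * X[:, 3] + rng.normal(scale=20, size=5000)

X_train, X_test, y_train, y_test = train_test_split(X, rul, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
pred = model.predict(X_test)
print(f"MAE: {mean_absolute_error(y_test, pred):.1f} (units of remaining life)")
```

In a real deployment the features would come from machine telemetry and maintenance logs, and the prediction would feed a preventive-care schedule of the kind SK describes.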
I don’t see any dramatic difference in the rate at which Industry 4.0 companies adopt a new technology like AI compared to any other new technology. The cycles are longer there, and I’m aware of that. But as a personal mission I’ve been talking to various CXO-level people, and I do understand the risk appetites and the investment opportunities in Industry 4.0; when I talk through and explain some of the simpler use cases that can be solved, I do see a willingness in the CXOs to adopt AI in Industry 4.0.
Certainly, there’s a lot of low-hanging fruit. I wanted to circle back to financial services and healthcare, at least as a starting point to discuss the issue of algorithmic bias, or ethical issues in AI. There’s quite a bit of debate around that, I think we saw a number of very well-known scientists and researchers come together and draft the 23 Asilomar principles really to guide artificial general intelligence. But I’d love to get your perspective on how to think about ethics, how to apply ethics, particularly when you’re looking at applied AI as it concerns data about people, of which there is so much floating around on the internet.
Ethics and bias are two massively complex topics within AI. Let me talk about bias first, then I’ll come to ethics, and connect them together.
AI models learn based on the data you give them, and sometimes, even though the model is not making a mistake, the data you’re feeding in is historical data with lots of biases baked in, so the model behaves in the same biased fashion. There are a good number of industry examples where even otherwise innovative companies made tremendous mistakes because they did not really pay attention to the biased data coming in.
Bias introduced because of data, bias introduced because of the lack of a smart neural network architecture, or bias introduced by not picking the right machine learning technique are some of the factors I see. They’re reflected in products either failing or not being accepted by the user community, because they just don’t seem to perform well, especially on some of the edge cases or corner cases which reflect bias; bias is definitely a huge issue. In the AI world, bias did not get the attention it deserves until about two years ago; in the last year or two, more and more research institutions, universities, companies, and AI celebrities have been emphasizing the importance of discussing bias when data scientists are taught how to process data and what data to consider. An additional complex task that needs to be taught to data scientists is what data needs to be dropped, because if you’re aware of bias, then you can look at the possibilities of bias coming in through the data.
Many people may not know this; I was doing research on how many biases exist, and I’m told there are 175 scientifically proven human biases, and they exist in every human being, who mostly doesn’t know about them. I went to Wikipedia, did all the research, and also talked to many people: there is so much bias that all of us human beings carry. It doesn’t matter whether you’re male or female, which country you come from, or what age group you belong to; people have their own set of biases, and people have to be extremely careful when they’re designing models, because the data and the architecture may let the bias creep in.
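One concrete, minimal way to surface this kind of bias is to compare a model’s behavior across subgroups of the data. The sketch below uses a hypothetical group attribute and synthetic labels that mimic historically skewed data; it checks accuracy and predicted positive rate per group, which is only one of many possible fairness checks.

```python
# A simple bias check: train a model, then compare its accuracy and
# predicted positive rate across two subgroups of the (synthetic) data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 4000
group = rng.integers(0, 2, n)   # hypothetical attribute, e.g. two demographics
X = rng.normal(size=(n, 5))
# Toy labels where group 0 is favored among positives, mimicking
# historical bias baked into training data.
y = ((X[:, 0] + 0.8 * (group == 0)) > 0.5).astype(int)

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(X, y, group, random_state=1)
model = LogisticRegression().fit(X_tr, y_tr)
pred = model.predict(X_te)

for g in (0, 1):
    mask = g_te == g
    acc = (pred[mask] == y_te[mask]).mean()
    pos_rate = pred[mask].mean()
    print(f"group {g}: accuracy = {acc:.3f}, predicted positive rate = {pos_rate:.3f}")
```

A large gap between the groups on either metric is the kind of signal that should prompt a look at what data to add, reweight, or drop.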
Ethics is another important and complex factor. What is the right application, the right use case? Can you use AI-based applications in ways that endanger humanity? For example, if an AI application is supporting discrimination, or you are unknowingly giving fewer opportunities to certain age groups or a certain gender, that’s because the ethics have not really been thought through. Again, similar to bias, ethics did not get its due attention until the last two years. In the last one or two years, I think a lot more people are talking about ethics and how to develop products which are ethically safe, and that doesn’t just apply to developed versus developing countries, or men versus women, or the rich versus the poor; everyone has to contribute to the ethical discussion.
There are, of course, sporadic successes in discussions about bias and ethics, but I think we have not reached a stage where we have a uniform understanding, or a uniform standard guideline, on how to avoid bias and how to comply with ethics. A lot more needs to be done, and I would encourage all the AI celebrities, people with strong opinions, and opinion leaders to come forward and discuss these two topics more, so that we don’t have to learn what mistakes we’ve made only after 10 years of deploying AI. I would rather encourage them to start discussing these two important and complex topics of ethics and bias even while AI itself is in its infancy, so that a very healthy, useful AI gets developed in due course.
I think that’s really going to be critical. Of course, AI has been a topic that has seen an enormous amount of hype and misinformation, or mischaracterization, a lot of it dystopian. How would you counsel a company that’s looking to implement or test out AI to sort through some of the unrealistic expectations? How do you cut through the hype, figure out what’s real, and arrive at use cases and a strategic road map for implementation that’s going to be realistic and result in the best outcomes?
I have spoken on this topic in a few conferences, and my YouTube channel has some videos where I talk about how to adopt AI in organizations, especially addressing the CXOs.
Like any major technology transformation, or any digital transformation initiative a company has gone through, AI transformation and AI adoption are no different. Organizations have to figure out why they want to adopt AI; AI does not solve all problems, so there has to be proper, thoughtful analysis before AI is adopted. At the same time, organizations historically, whenever they adopt a new technology, do a proof of concept or a pilot adoption; some companies have done that, and some have taken the opposite approach of a big bang. Companies have to have a proper strategy and identify which areas and which business challenges could be addressed using AI solutions. Last but not least is managing the change in the organization: making a plan, communicating the plan, identifying the right talent, training people, and watching carefully the impact on jobs. With any new technology there will typically be some impact on jobs, and companies should carefully plan for that, and then skill and reskill the people affected by it.
So, there’s an entire gamut of activities: starting with a strategy and proper critical thinking, getting the opinion leaders in the organization rallying behind AI, identifying the right talent, identifying some of the low-hanging fruit that needs to be addressed, or identifying the biggest bang-for-the-buck use cases, carefully implementing and monitoring them, and making sure that the metrics identified as part of the plan are being accomplished. That is the typical, standard method I would encourage companies and CXOs to go for.
Then of course there are so many other good ideas and bad ideas, dos and don’ts, which are part of my presentations. I encourage your listeners to go to my YouTube channel, that’s SKReddy99, and watch some of those videos.
We’ll include that resource in our show notes as well. I wanted to just ask as we look forward into the future, how do you see the market evolving? I always like to ask what you’re most optimistic about, and what might be some concerns that keep you up at night?
Let’s talk about the optimistic first, I think we can talk about the concerns later.
- I’m quite optimistic; I think AI is going to solve many challenges and use cases that people have in their workplaces, and in their personal lives too. I think AI will make life easier, safer, better, and more productive, not just for individuals and companies, but also for countries and communities. This could be applied in agriculture, mining, transportation, healthcare, financial services, anything that touches human life. I’m quite optimistic about that.
- I’m also optimistic about technology becoming cheaper and easier through more open solutions, the availability of computing infrastructure and datasets, and newer techniques being invented which will not demand too much data. That’s another optimism I carry with me.
- Another optimism: because of the growing, widespread acceptability of AI, and the examples of certain countries, including Germany, which you mentioned, plus Switzerland, the United States, China, and so many others, coming forward to declare policy documents on AI, there will be even wider adoption. And for countries which are not as advanced, I think there will be collaboration across countries to help each other and go forward. That’s another optimism I carry with me in terms of AI.
So, digital transformation came, say, 10-15 years ago and helped organizations do business intelligence and analytics; now the next natural step, adopting AI, will further help companies produce better solutions and help individuals. These are some of the optimisms I carry with me.
- Concerns: even though the cost of infrastructure for AI is falling, it’s not falling fast enough. The availability of datasets, even though it has really improved compared to, let’s say, as recently as two or three years ago, especially in image processing, where I know there are a large number of datasets available, we still need more and more datasets, and more and more companies collaborating and sharing datasets with each other. Whether it’s in healthcare, financial services, insurance, or Industry 4.0, I think there’s tremendous opportunity there, and that’s where I’m concerned: it’s not happening fast enough.
- Another concern is that there’s a great demand from more and more companies and countries that want to see the benefit of AI, but we don’t have enough talent available in the market who could deliver it. This demand-supply gap is creating a good amount of heartburn among companies and countries, so that’s another concern I have.
Other than these, I’m a lot more optimistic, and I think AI will solve many, many use cases. There are a large number of use cases, whether in healthcare, Industry 4.0, financial services, insurance services, or transportation, that still need to be addressed with existing technology, without even worrying about future technology or future innovations. I am so optimistic that we can solve, and we should solve, some of these use cases right away.
That’s great. There have been so many great insights in this conversation. I always like to ask, as a final question, for a book recommendation, and I know you’re a pretty avid reader. Do you have any recommendations you could share with our listeners?
Sure, I have a couple of books, and before I mention their names, I would like to mention what my father used to tell me all the time when I was growing up. He always said, ‘If you don’t read, there’s no difference between you and a donkey’! That’s the stuff my father used to tell me about how important it is for one to read, and that’s what made me a reader of many non-technology books too.
A couple of books have really attracted my attention in the recent past, and I would like to recommend them to your listeners. One is a book called ‘The Power of Habit’ by Charles Duhigg, a fantastic book that explains how people form habits, especially bad habits; he also talks about research into how people can effectively adopt good habits.
Another fantastic book is ‘Outliers’ by Malcolm Gladwell, which talks about how people achieve super-performance, and what seems to be the reason for that. Another book by Malcolm Gladwell is ‘David & Goliath’, which talks about companies and individuals, and even applies to countries too: some of what an individual or a company believes is a strength is no longer a strength and is actually a weakness, and some of what they think is a weakness is no longer a weakness but is actually a strength. It’s a very intelligent logic, and I would urge your listeners to read that book too.
One book which I recently read is titled ‘Sway: The Irresistible Pull of Irrational Behaviour’, by Ori Brafman and, I think, his brother Rom Brafman. It’s a fantastic book which explains why human beings, even rational human beings, make mistakes, and what seems to be the thought process behind that. It would be a good insight for everyone.
I’m told a fantastic leader has a very high amount of self-awareness, so if any one of you wants to become a leader, I think you need to have a good amount of insight about yourself, which includes…
- What you like. What you don’t like.
- What are your strengths? What are your weaknesses?
- What is it you can do well? What is it you cannot do well?
- What is it you really enjoy? What is it you don’t enjoy?
This type of insight is extremely important, and books like Sway, Outliers, David & Goliath, and The Power of Habit will, I think, definitely help your listeners become even better leaders.
Well, those are great recommendations, and I can see how applicable they would be to understanding AI, machine learning, and general intelligence as it applies to business.
This has been a fantastic conversation. Again, it’s Ed Maguire, Insights Partner at Momenta Partners, and our guest has been SK Reddy, who is the Chief Product Officer in AI at Hexagon, and he’s an entrepreneur and technologist. We’ll have a lot of these resources in the show notes, and I recommend everybody go to SK’s YouTube channel for further insights. So, thank you so much for taking the time once again.
Thank you, Ed, thank you very much, I really appreciate you guys giving me the opportunity to talk to you. I hope your listeners will benefit something from my conversation with you.
Thanks again.
[End]