José G. Quenum

Results: 18 comments by José G. Quenum

Let me try again and provide you with more context. Cheers.

On 9 Aug 2011, at 21:59, bblimke wrote:
> No. Are you able to give more context?
> > ...

Thanks for your response. I followed your suggestion, but I am still getting the same error. Here is what I did: I downloaded the spark-sql-kafka jar into a folder. Let's call it...
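For concreteness, a minimal sketch of that download step in Julia; the folder path is a placeholder, and the URL assumes the connector version used later in this thread, as published on Maven Central:

```Julia
using Downloads

# Hypothetical folder; the actual path is elided above.
jar_dir = "/opt/spark-extra-jars"

# Standard Maven Central location for the Spark 3.5.2 / Scala 2.12 connector.
jar_url = "https://repo1.maven.org/maven2/org/apache/spark/spark-sql-kafka-0-10_2.12/3.5.2/spark-sql-kafka-0-10_2.12-3.5.2.jar"

mkpath(jar_dir)
Downloads.download(jar_url, joinpath(jar_dir, basename(jar_url)))
```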

Below is the full stack trace of the error I get:

```
Exception in thread "main" org.apache.spark.sql.AnalysisException: Failed to find data source: kafka. Please deploy the application as per the deployment...
```
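In case it helps others, "Failed to find data source: kafka" generally means the spark-sql-kafka connector is not on the classpath at all. A minimal sketch of the usual fix, assuming the PySpark-style builder API that Spark.jl exposes and the Spark/Scala versions used later in this thread:

```Julia
# Hedged sketch: let Spark resolve the connector (and its transitive
# dependencies, such as kafka-clients) from Maven at startup.
spark = SparkSession.builder.appName("SoftwareTools").config("spark.jars.packages", "org.apache.spark:spark-sql-kafka-0-10_2.12:3.5.2").master("spark://IP:7077").getOrCreate()
```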

Stack trace (most recent locations first):

```
geterror() @ core.jl:544
_jcall(::JavaCall.JavaObject{Symbol("org.apache.spark.sql.streaming.DataStreamReader")}, ::Ptr{Nothing}, ::Type, ::Tuple{}; callmethod::typeof(JavaCall.JNI.CallObjectMethodA)) @ core.jl:482
_jcall(::JavaCall.JavaObject{Symbol("org.apache.spark.sql.streaming.DataStreamReader")}, ::Ptr{Nothing}, ::Type, ::Tuple{}) @ core.jl:475
jcall(::JavaCall.JavaObject{Symbol("org.apache.spark.sql.streaming.DataStreamReader")}, ::String, ...
```

It keeps throwing the same error. Suspecting that the added jar was not propagated, I created the same folder on each worker node and added the jar. But I am...
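As far as I know, jars listed under `spark.jars` are shipped to executors by the driver, so copying them by hand is a fallback rather than a requirement. If the jar really does sit at the same absolute path on every node, the classic way to force it onto each JVM's classpath is the `extraClassPath` settings. A sketch, reusing the placeholder path from above and assuming the builder allows chained `config` calls as in PySpark:

```Julia
# Hedged sketch: point both driver and executors at a jar that already
# exists at the same absolute path on every node in the cluster.
spark = SparkSession.builder.appName("SoftwareTools").config("spark.driver.extraClassPath", "/path/to/spark-sql-kafka-0-10_2.12-3.5.2.jar").config("spark.executor.extraClassPath", "/path/to/spark-sql-kafka-0-10_2.12-3.5.2.jar").master("spark://IP:7077").getOrCreate()
```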

I am running Spark in standalone mode on a 3-node cluster: one master and two workers, with each Spark instance on a different node. The Kafka instance...

I made a few changes. I reverted the session back to `spark.jars` as follows:

```Julia
spark = SparkSession.builder.appName("SoftwareTools").config("spark.jars", "/path/to/spark-sql-kafka-0-10_2.12-3.5.2.jar").master("spark://IP:7077").getOrCreate()
```

I also upgraded the version of Scala to 2.12.2 to...
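One caveat worth noting here: `spark.jars` takes a comma-separated list of local jars and does not resolve transitive dependencies, which is why the Kafka clients jar has to be supplied separately (as it eventually was below). A sketch with a placeholder kafka-clients version:

```Julia
# Hedged sketch: list every required jar explicitly, since spark.jars
# performs no dependency resolution. The kafka-clients version is a
# placeholder; it must match what spark-sql-kafka expects.
spark = SparkSession.builder.appName("SoftwareTools").config("spark.jars", "/path/to/spark-sql-kafka-0-10_2.12-3.5.2.jar,/path/to/kafka-clients-3.4.1.jar").master("spark://IP:7077").getOrCreate()
```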

I made the suggested change. But my code is in a Pluto notebook. Do you think it will rebuild automatically if I restart the notebook? Spark is not available in...
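For anyone with the same question: as far as I know, restarting the notebook does not rebuild packages by itself. Assuming the notebook runs against a regular Pkg environment rather than Pluto's built-in package manager, the rebuild can be forced explicitly; a minimal sketch:

```Julia
# Force a rebuild of Spark.jl from a notebook cell (or the REPL),
# then restart the Julia process so the new build is picked up.
using Pkg
Pkg.build("Spark")
```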

After following the suggestion above and successfully building the Spark package, I am now observing two types of errors: 1. When I use the `spark.jars.packages` config option, I get...

I added the Kafka clients package and it finally worked. Thanks for your assistance with that. Now I am getting an error with `df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")`...
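For completeness, the shape of the failing step, sketched with Spark.jl's PySpark-style streaming API; the broker address and topic name are placeholders, not the ones from my setup:

```Julia
# Hedged sketch: subscribe to a Kafka topic, then cast the binary
# key/value columns to strings before further processing.
df = spark.readStream.format("kafka").option("kafka.bootstrap.servers", "KAFKA_IP:9092").option("subscribe", "mytopic").load()
casted = df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
```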