UPDATE Cassandra table using spark cassandra connector

I'm facing an issue with the Spark Cassandra connector in Scala while updating a table in my keyspace.

Here is my piece of code:

                        " SET a = a + " + b + " WHERE x = " +
                        x + " AND y = " + y +
                        " AND z = " + z


val KeySpace    = new CassandraSQLContext(sparkContext)


When I execute this code, I get the following error:

Exception in thread "main" java.lang.RuntimeException: [1.1] failure: ``insert'' expected but identifier UPDATE found

Any idea why this is happening? How can I fix this?


Updating a table with a counter column is feasible via the spark-cassandra-connector. You will have to use DataFrames and the DataFrameWriter method save with mode "append" (or SaveMode.Append if you prefer). Check the code in DataFrameWriter.scala.

For example, given a table:

cqlsh:test> SELECT * FROM name_counter ;

 name    | surname | count
---------+---------+-------
    John |   Smith |   100
   Zhang |     Wei |  1000
 Angelos |   Papas |    10

The code should look like this:

import org.apache.spark.sql.types.{LongType, StringType, StructField, StructType}
import org.apache.spark.sql.{Row, SaveMode}

val updateRdd = sc.parallelize(Seq(Row("John",    "Smith", 1L),
                                   Row("Zhang",   "Wei",   2L),
                                   Row("Angelos", "Papas", 3L)))

val tblStruct = new StructType(
    Array(StructField("name",    StringType, nullable = false),
          StructField("surname", StringType, nullable = false),
          StructField("count",   LongType,   nullable = false)))

val updateDf  = sqlContext.createDataFrame(updateRdd, tblStruct)

updateDf.write
    .format("org.apache.spark.sql.cassandra")
    .options(Map("keyspace" -> "test", "table" -> "name_counter"))
    .mode(SaveMode.Append)
    .save()

After the save, the counters are incremented:

 name    | surname | count
---------+---------+-------
    John |   Smith |   101
   Zhang |     Wei |  1002
 Angelos |   Papas |    13

The DataFrame construction can be made simpler by implicitly converting an RDD to a DataFrame: import sqlContext.implicits._ and use .toDF().
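As a sketch of that implicit conversion, reusing sc and sqlContext from the example above (the NameCount case class name is an assumption, introduced here only so toDF() can infer the schema via reflection):

```scala
import sqlContext.implicits._

// A case class gives toDF() the column names and types,
// so no manual StructType is needed.
case class NameCount(name: String, surname: String, count: Long)

// The same increments as the StructType-based version above.
val updateDf = sc.parallelize(Seq(
    NameCount("John",    "Smith", 1L),
    NameCount("Zhang",   "Wei",   2L),
    NameCount("Angelos", "Papas", 3L)
  )).toDF()

// updateDf can then be written with the same DataFrameWriter
// call as above (format "org.apache.spark.sql.cassandra",
// mode SaveMode.Append).
```

Note that in the Spark REPL the case class must be defined before it is used in toDF().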

Check the full code for this toy application: https://github.com/kyrsideris/SparkUpdateCassandra/tree/master

Since versions are very important here, the above applies to Scala 2.11.7, Spark 1.5.1, spark-cassandra-connector 1.5.0-RC1-s_2.11, and Cassandra 3.0.5. Note that DataFrameWriter has been marked @Experimental since 1.4.0.

I believe that you cannot update natively through the Spark connector. See the documentation:

"The default behavior of the Spark Cassandra Connector is to overwrite collections when inserted into a cassandra table. To override this behavior you can specify a custom mapper with instructions on how you would like the collection to be treated."

So you'll want to actually INSERT a new record with an existing key.
